| Column | Type | Range / values |
|:--|:--|:--|
| modelId | string | lengths 5 – 139 |
| author | string | lengths 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-27 00:47:30 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 533 distinct values |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-27 00:47:21 |
| card | string | lengths 11 – 1.01M |
encku/glc-06-2025 | encku | 2025-06-19T10:59:15Z | 0 | 0 | transformers | ["transformers", "safetensors", "vit", "image-classification", "autotrain", "base_model:google/vit-large-patch32-384", "base_model:finetune:google/vit-large-patch32-384", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2025-06-19T10:58:00Z |
---
tags:
- autotrain
- transformers
- image-classification
base_model: google/vit-large-patch32-384
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.006576409563422203
f1_macro: 0.9982094907426964
f1_micro: 0.9982098102398854
f1_weighted: 0.9982094907426965
precision_macro: 0.998216473222845
precision_micro: 0.9982098102398854
precision_weighted: 0.9982164732228451
recall_macro: 0.9982098102398854
recall_micro: 0.9982098102398854
recall_weighted: 0.9982098102398854
accuracy: 0.9982098102398854
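Beyond these metrics the card gives no usage code; the snippet below is a minimal inference sketch (not part of the original card), assuming the standard 🤗 Transformers image-classification pipeline and this row's repository id.
```python
# Hedged sketch: the repo id is taken from this row's metadata; the class labels depend on
# the (undocumented) AutoTrain dataset, so inspect the returned labels yourself.
from transformers import pipeline

classifier = pipeline("image-classification", model="encku/glc-06-2025")
for prediction in classifier("path/to/image.jpg"):  # local path or URL
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```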
|
Exclusive-Mezzo-fun-hd-Viral-Videos/Full.Clip.mezzo.fun.Viral.Video.hd.Official | Exclusive-Mezzo-fun-hd-Viral-Videos | 2025-06-19T10:58:49Z | 0 | 0 | null | ["region:us"] | null | 2025-06-19T10:58:00Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
JustKnow/wav2vec2-large-xlsr-twi | JustKnow | 2025-06-19T10:53:29Z | 0 | 0 | transformers | ["transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-06-19T10:41:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
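Until the authors fill this in, a minimal ASR sketch is given below; the pipeline task and repository id are assumptions taken from this row's tags (`wav2vec2`, `automatic-speech-recognition`), not from the card itself.
```python
# Hedged sketch: assumes the checkpoint works with the standard ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="JustKnow/wav2vec2-large-xlsr-twi")
result = asr("path/to/audio.wav")  # wav2vec2 XLSR models typically expect 16 kHz mono audio
print(result["text"])
```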
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nnilayy/deap-dominance-multi-classification-Kfold-5 | nnilayy | 2025-06-19T10:50:54Z | 0 | 0 | null | ["safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us"] | null | 2025-06-19T10:50:50Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
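Because the defining `nn.Module` class is not documented here, the sketch below uses a hypothetical `DominanceClassifier` stand-in; loading only works with the authors' actual class and matching constructor arguments.
```python
# Hypothetical sketch: `DominanceClassifier` stands in for the authors' undocumented model class.
# PyTorchModelHubMixin adds from_pretrained()/push_to_hub() to any torch.nn.Module subclass.
import torch
from huggingface_hub import PyTorchModelHubMixin

class DominanceClassifier(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_features: int = 32, num_classes: int = 3):
        super().__init__()
        self.net = torch.nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.net(x)

# Succeeds only if this class matches the architecture that was pushed to the Hub.
model = DominanceClassifier.from_pretrained("nnilayy/deap-dominance-multi-classification-Kfold-5")
```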
|
alana89/TabSTAR-eval-320-version-fold-k2 | alana89 | 2025-06-19T10:49:32Z | 0 | 0 | null | ["safetensors", "tabstar", "license:apache-2.0", "region:us"] | null | 2025-06-19T10:44:07Z |
---
license: apache-2.0
---
|
yalhessi/lemexp-task1-v2-template_small_notypes-Llama-3.2-1B-ddp-8lr-v2 | yalhessi | 2025-06-19T10:47:21Z | 0 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-1B", "base_model:adapter:meta-llama/Llama-3.2-1B", "license:llama3.2", "region:us"] | null | 2025-06-19T10:46:47Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
model-index:
- name: lemexp-task1-v2-template_small_notypes-Llama-3.2-1B-ddp-8lr-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lemexp-task1-v2-template_small_notypes-Llama-3.2-1B-ddp-8lr-v2
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.4784 | 0.2001 | 720 | 0.3957 |
| 0.3882 | 0.4001 | 1440 | 0.3714 |
| 0.3441 | 0.6002 | 2160 | 0.3460 |
| 0.3306 | 0.8002 | 2880 | 0.3277 |
| 0.315 | 1.0003 | 3600 | 0.3226 |
| 0.3041 | 1.2003 | 4320 | 0.3118 |
| 0.2948 | 1.4004 | 5040 | 0.3023 |
| 0.2915 | 1.6004 | 5760 | 0.3064 |
| 0.2893 | 1.8005 | 6480 | 0.2940 |
| 0.2831 | 2.0006 | 7200 | 0.2856 |
| 0.2725 | 2.2006 | 7920 | 0.2785 |
| 0.2696 | 2.4007 | 8640 | 0.2815 |
| 0.2633 | 2.6007 | 9360 | 0.2690 |
| 0.2626 | 2.8008 | 10080 | 0.2787 |
| 0.2622 | 3.0008 | 10800 | 0.2676 |
| 0.2482 | 3.2009 | 11520 | 0.2641 |
| 0.2518 | 3.4009 | 12240 | 0.2613 |
| 0.2447 | 3.6010 | 12960 | 0.2664 |
| 0.2433 | 3.8011 | 13680 | 0.2579 |
| 0.2413 | 4.0011 | 14400 | 0.2507 |
| 0.2311 | 4.2012 | 15120 | 0.2530 |
| 0.2301 | 4.4012 | 15840 | 0.2455 |
| 0.2258 | 4.6013 | 16560 | 0.2434 |
| 0.2266 | 4.8013 | 17280 | 0.2432 |
| 0.2209 | 5.0014 | 18000 | 0.2397 |
| 0.2136 | 5.2014 | 18720 | 0.2398 |
| 0.212 | 5.4015 | 19440 | 0.2313 |
| 0.2108 | 5.6016 | 20160 | 0.2345 |
| 0.2113 | 5.8016 | 20880 | 0.2331 |
| 0.2065 | 6.0017 | 21600 | 0.2213 |
| 0.1975 | 6.2017 | 22320 | 0.2153 |
| 0.1988 | 6.4018 | 23040 | 0.2151 |
| 0.1931 | 6.6018 | 23760 | 0.2157 |
| 0.1947 | 6.8019 | 24480 | 0.2185 |
| 0.1917 | 7.0019 | 25200 | 0.2139 |
| 0.1828 | 7.2020 | 25920 | 0.2118 |
| 0.1819 | 7.4021 | 26640 | 0.2120 |
| 0.1798 | 7.6021 | 27360 | 0.2044 |
| 0.1762 | 7.8022 | 28080 | 0.2013 |
| 0.1794 | 8.0022 | 28800 | 0.1981 |
| 0.1652 | 8.2023 | 29520 | 0.1974 |
| 0.1629 | 8.4023 | 30240 | 0.1958 |
| 0.1642 | 8.6024 | 30960 | 0.1969 |
| 0.1607 | 8.8024 | 31680 | 0.1891 |
| 0.1611 | 9.0025 | 32400 | 0.1878 |
| 0.15 | 9.2026 | 33120 | 0.1866 |
| 0.1465 | 9.4026 | 33840 | 0.1838 |
| 0.1474 | 9.6027 | 34560 | 0.1828 |
| 0.1468 | 9.8027 | 35280 | 0.1776 |
| 0.1416 | 10.0028 | 36000 | 0.1768 |
| 0.1309 | 10.2028 | 36720 | 0.1770 |
| 0.1309 | 10.4029 | 37440 | 0.1770 |
| 0.1296 | 10.6029 | 38160 | 0.1755 |
| 0.1297 | 10.8030 | 38880 | 0.1723 |
| 0.1297 | 11.0031 | 39600 | 0.1694 |
| 0.1185 | 11.2031 | 40320 | 0.1716 |
| 0.1156 | 11.4032 | 41040 | 0.1692 |
| 0.1142 | 11.6032 | 41760 | 0.1691 |
| 0.1126 | 11.8033 | 42480 | 0.1674 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
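The card ships no usage snippet; below is a minimal loading sketch (not from the original card), assuming this repository holds a PEFT/LoRA adapter for the gated `meta-llama/Llama-3.2-1B` base model, as the metadata indicates.
```python
# Hedged sketch: the prompt is a placeholder; the base model is gated and requires
# accepting the Llama 3.2 license and authenticating with the Hub.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
model = PeftModel.from_pretrained(base, "yalhessi/lemexp-task1-v2-template_small_notypes-Llama-3.2-1B-ddp-8lr-v2")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

inputs = tokenizer("example input", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```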
|
louashka/ppo-LunarLander-v2 | louashka | 2025-06-19T10:45:55Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2025-06-19T10:45:31Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.31 +/- 18.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's Files & versions tab):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained PPO policy.
# The filename below is an assumption; check the repository's file listing.
checkpoint = load_from_hub("louashka/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
svjack/Step1X-Anime-Edit-Lora | svjack | 2025-06-19T10:44:26Z | 0 | 0 | null | ["region:us"] | null | 2025-06-19T09:56:26Z |
# Step1X-Anime-Edit-Lora
This repository provides a LoRA (Low-Rank Adaptation) adapter for the Step1X-Edit anime image editing model, giving finer control over image-editing outputs.
## Installation
Refer to the main Step1X-Edit installation instructions at:
https://github.com/stepfun-ai/Step1X-Edit
```bash
wget https://huggingface.co/stepfun-ai/Step1X-Edit/resolve/main/step1x-edit-i1258.safetensors
wget https://huggingface.co/stepfun-ai/Step1X-Edit/resolve/main/vae.safetensors
huggingface-cli download Qwen/Qwen2.5-VL-7B-Instruct --local-dir Qwen2.5-VL-7B-Instruct
```
## Usage Examples
### Basic Setup
```python
from inference import *
image_edit = ImageGenerator(
ae_path="vae.safetensors",
dit_path="step1x-edit-i1258.safetensors",
qwen2vl_model_path='Qwen2.5-VL-7B-Instruct',
max_length=640,
quantized=True,
offload=True,
lora="change_output/step1x-edit_change-step00003000.safetensors",
mode="flash"
)
```
### Example 1: Changing Background and Adding Elements

```python
image_path = "万叶.png"
prompt = '''
将背景改成公园,添加一些小松鼠
'''  # EN: "Change the background to a park and add some small squirrels."
num_steps = 28
cfg_guidance = 4.5
seed = 42
size_level = 512 # Can also be 768 or 1024
image = image_edit.generate_image(
prompt,
negative_prompt="",
ref_images=Image.open(image_path).convert("RGB"),
num_samples=1,
num_steps=num_steps,
cfg_guidance=cfg_guidance,
seed=seed,
show_progress=True,
size_level=size_level,
)[0]
image.save("万叶在公园.png")
```
- original output

- lora output

### Example 2: Advanced Scene Modification

```python
image_path = "万叶.png"
prompt = '''
将背景改成公园,添加一些小松鼠,天气为黄昏,调整为橙色光照,让男孩微笑
'''  # EN: "Change the background to a park, add some small squirrels, set the time to dusk, switch to orange lighting, and make the boy smile."
# Same parameters as above
image = image_edit.generate_image(...)
image.save("万叶在黄昏.png")
```
- original output

- lora output

### Example 3: Character Modification

```python
image_path = "塔利亚.jpg"
prompt = '''
将图片背景变成海边,手里拿着一个冰淇凌
'''  # EN: "Change the background to a seaside scene and put an ice cream in the character's hand."
num_steps = 28
cfg_guidance = 6 # Higher guidance for more complex changes
seed = 42
size_level = 512
image = image_edit.generate_image(...)
image.save("塔利亚在海边.jpg")
```
- original output

- lora output

### Example 4: Object Replacement and Style Change

```python
image_path = "星铁海报.jpg"
prompt = '''
将桌子上的鞋替换成一个汉堡,背景换成星光咖啡厅,帽子换成小熊帽
'''  # EN: "Replace the shoes on the table with a hamburger, change the background to a starlit café, and swap the hat for a bear hat."
num_steps = 28
cfg_guidance = 4.5
seed = 42
size_level = 512
image = image_edit.generate_image(...)
image.save("星铁小猫在咖啡厅.png")
```
- original output

- lora output

## Parameters
- `num_steps`: Number of diffusion steps (typically 28)
- `cfg_guidance`: Guidance scale (4.5-6 recommended)
- `seed`: Random seed for reproducibility
- `size_level`: Output resolution (512; 768 and 1024 are also supported)
## Output Comparison
Each example shows the original output vs. LoRA-enhanced output for comparison.
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb3-seed28-2025-06-19 | morturr | 2025-06-19T10:44:03Z | 0 | 0 | peft | ["peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us"] | null | 2025-06-19T10:43:53Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb3-seed28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb3-seed28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
videos-Sajal-Malik-Viral-Video-Original/FULL.VIDEO.Sajal.Malik.viral.video.Link.viral.On.Social.Media.Official | videos-Sajal-Malik-Viral-Video-Original | 2025-06-19T10:41:13Z | 0 | 0 | null | ["region:us"] | null | 2025-06-19T10:41:04Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
neural-interactive-proofs/finetune_dpo_cv_test_lm_server_45_0_iter_0_provers_group_2025-06-19_11-38-10_Qwen_Qwen2.5-0.5B-I | neural-interactive-proofs | 2025-06-19T10:39:10Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-06-19T10:38:58Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: finetune_dpo_cv_test_lm_server_45_0_iter_0_provers_group_2025-06-19_11-38-10_Qwen_Qwen2.5-0.5B-I
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_cv_test_lm_server_45_0_iter_0_provers_group_2025-06-19_11-38-10_Qwen_Qwen2.5-0.5B-I
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_cv_test_lm_server_45_0_iter_0_provers_group_2025-06-19_11-38-10_Qwen_Qwen2.5-0.5B-I", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/Qwen_Qwen2.5-0.5B-Instruct_dpo_2025-06-19_11-38-10_cv_test_lm_server_45_0_iter_0_provers_group)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Khruna/Jack | Khruna | 2025-06-19T10:38:08Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-06-19T10:37:54Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Jack
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/Jack/tree/main) them in the Files & versions tab.
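No usage code is provided; the sketch below (not part of the original card) shows one way to apply this LoRA with 🧨 diffusers. The trigger prompt is unknown (`instance_prompt` is null), so the prompt here is a placeholder, and `FLUX.1-dev` is gated and needs substantial GPU memory.
```python
# Hedged sketch: placeholder prompt; FLUX.1-dev requires accepting its license on the Hub.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("Khruna/Jack")  # LoRA weights from this repository
pipe.to("cuda")

image = pipe("a portrait photo", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("jack_sample.png")
```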
|
New-tutorial-guru-salsa-18-Viral-Videos/FULL.VIDEO.guru.salsa.Viral.Video.Tutorial.Official | New-tutorial-guru-salsa-18-Viral-Videos | 2025-06-19T10:37:56Z | 0 | 0 | null | ["region:us"] | null | 2025-06-19T10:36:39Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
tomaarsen/splade-distilbert-base-uncased-quora-duplicates | tomaarsen | 2025-06-19T10:36:10Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "distilbert", "sparse-encoder", "sparse", "splade", "generated_from_trainer", "dataset_size:99000", "loss:SpladeLoss", "loss:SparseMultipleNegativesRankingLoss", "loss:FlopsLoss", "feature-extraction", "en", "dataset:sentence-transformers/quora-duplicates", "arxiv:1908.10084", "arxiv:2205.04733", "arxiv:1705.00652", "arxiv:2004.05665", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2025-06-19T10:36:01Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:99000
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: distilbert/distilbert-base-uncased
widget:
- text: How do I know if a girl likes me at school?
- text: What are some five star hotel in Jaipur?
- text: Is it normal to fantasize your wife having sex with another man?
- text: What is the Sahara, and how do the average temperatures there compare to the
ones in the Simpson Desert?
- text: What are Hillary Clinton's most recognized accomplishments while Secretary
of State?
datasets:
- sentence-transformers/quora-duplicates
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
- dot_accuracy
- dot_accuracy_threshold
- dot_f1
- dot_f1_threshold
- dot_precision
- dot_recall
- dot_ap
- dot_mcc
- euclidean_accuracy
- euclidean_accuracy_threshold
- euclidean_f1
- euclidean_f1_threshold
- euclidean_precision
- euclidean_recall
- euclidean_ap
- euclidean_mcc
- manhattan_accuracy
- manhattan_accuracy_threshold
- manhattan_f1
- manhattan_f1_threshold
- manhattan_precision
- manhattan_recall
- manhattan_ap
- manhattan_mcc
- max_accuracy
- max_accuracy_threshold
- max_f1
- max_f1_threshold
- max_precision
- max_recall
- max_ap
- max_mcc
- active_dims
- sparsity_ratio
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 29.19330199735101
energy_consumed: 0.07510458396754072
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.306
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: splade-distilbert-base-uncased trained on Quora Duplicates Questions
results:
- task:
type: sparse-binary-classification
name: Sparse Binary Classification
dataset:
name: quora duplicates dev
type: quora_duplicates_dev
metrics:
- type: cosine_accuracy
value: 0.759
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.8012633323669434
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.6741573033707865
name: Cosine F1
- type: cosine_f1_threshold
value: 0.542455792427063
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.528169014084507
name: Cosine Precision
- type: cosine_recall
value: 0.9316770186335404
name: Cosine Recall
- type: cosine_ap
value: 0.6875984052094628
name: Cosine Ap
- type: cosine_mcc
value: 0.5059561809366392
name: Cosine Mcc
- type: dot_accuracy
value: 0.754
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 47.276466369628906
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.6759581881533101
name: Dot F1
- type: dot_f1_threshold
value: 40.955284118652344
name: Dot F1 Threshold
- type: dot_precision
value: 0.5398886827458256
name: Dot Precision
- type: dot_recall
value: 0.9037267080745341
name: Dot Recall
- type: dot_ap
value: 0.6070585464263578
name: Dot Ap
- type: dot_mcc
value: 0.5042382773971489
name: Dot Mcc
- type: euclidean_accuracy
value: 0.677
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: -14.295218467712402
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.48599545798637395
name: Euclidean F1
- type: euclidean_f1_threshold
value: -0.5385364294052124
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.3213213213213213
name: Euclidean Precision
- type: euclidean_recall
value: 0.9968944099378882
name: Euclidean Recall
- type: euclidean_ap
value: 0.20430811061248494
name: Euclidean Ap
- type: euclidean_mcc
value: -0.04590966956831287
name: Euclidean Mcc
- type: manhattan_accuracy
value: 0.677
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: -163.6865234375
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.48599545798637395
name: Manhattan F1
- type: manhattan_f1_threshold
value: -2.7509355545043945
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.3213213213213213
name: Manhattan Precision
- type: manhattan_recall
value: 0.9968944099378882
name: Manhattan Recall
- type: manhattan_ap
value: 0.20563864564607998
name: Manhattan Ap
- type: manhattan_mcc
value: -0.04590966956831287
name: Manhattan Mcc
- type: max_accuracy
value: 0.759
name: Max Accuracy
- type: max_accuracy_threshold
value: 47.276466369628906
name: Max Accuracy Threshold
- type: max_f1
value: 0.6759581881533101
name: Max F1
- type: max_f1_threshold
value: 40.955284118652344
name: Max F1 Threshold
- type: max_precision
value: 0.5398886827458256
name: Max Precision
- type: max_recall
value: 0.9968944099378882
name: Max Recall
- type: max_ap
value: 0.6875984052094628
name: Max Ap
- type: max_mcc
value: 0.5059561809366392
name: Max Mcc
- type: active_dims
value: 83.36341094970703
name: Active Dims
- type: sparsity_ratio
value: 0.9972687434981421
name: Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: dot_accuracy@1
value: 0.24
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.44
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.56
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.74
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.24
name: Dot Precision@1
- type: dot_precision@3
value: 0.14666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.11200000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.07400000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.24
name: Dot Recall@1
- type: dot_recall@3
value: 0.44
name: Dot Recall@3
- type: dot_recall@5
value: 0.56
name: Dot Recall@5
- type: dot_recall@10
value: 0.74
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.46883808093835555
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3849920634920634
name: Dot Mrr@10
- type: dot_map@100
value: 0.39450094910993877
name: Dot Map@100
- type: query_active_dims
value: 84.87999725341797
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9972190551977781
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 104.35554504394531
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9965809729033503
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.24
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.44
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.6
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.74
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.24
name: Dot Precision@1
- type: dot_precision@3
value: 0.14666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.12000000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.07400000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.24
name: Dot Recall@1
- type: dot_recall@3
value: 0.44
name: Dot Recall@3
- type: dot_recall@5
value: 0.6
name: Dot Recall@5
- type: dot_recall@10
value: 0.74
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.46663046446554135
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3821587301587301
name: Dot Mrr@10
- type: dot_map@100
value: 0.39141822290426725
name: Dot Map@100
- type: query_active_dims
value: 94.9000015258789
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9968907672653863
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 115.97699737548828
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9962002163234556
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: dot_accuracy@1
value: 0.18
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.44
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.52
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.58
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.18
name: Dot Precision@1
- type: dot_precision@3
value: 0.14666666666666667
name: Dot Precision@3
- type: dot_precision@5
value: 0.10400000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.06000000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.17
name: Dot Recall@1
- type: dot_recall@3
value: 0.41
name: Dot Recall@3
- type: dot_recall@5
value: 0.48
name: Dot Recall@5
- type: dot_recall@10
value: 0.55
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.3711173352982992
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.32435714285714284
name: Dot Mrr@10
- type: dot_map@100
value: 0.32104591506684527
name: Dot Map@100
- type: query_active_dims
value: 76.81999969482422
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9974831269348396
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 139.53028869628906
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9954285338871539
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.18
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.46
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.5
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.64
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.18
name: Dot Precision@1
- type: dot_precision@3
value: 0.1533333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.10000000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.066
name: Dot Precision@10
- type: dot_recall@1
value: 0.17
name: Dot Recall@1
- type: dot_recall@3
value: 0.43
name: Dot Recall@3
- type: dot_recall@5
value: 0.46
name: Dot Recall@5
- type: dot_recall@10
value: 0.61
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.39277722565932277
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.33549999999999996
name: Dot Mrr@10
- type: dot_map@100
value: 0.3266050492721919
name: Dot Map@100
- type: query_active_dims
value: 85.72000122070312
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9971915339354989
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 156.10665893554688
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.994885438079564
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: dot_accuracy@1
value: 0.28
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.42
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.46
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.52
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.28
name: Dot Precision@1
- type: dot_precision@3
value: 0.24
name: Dot Precision@3
- type: dot_precision@5
value: 0.2
name: Dot Precision@5
- type: dot_precision@10
value: 0.16
name: Dot Precision@10
- type: dot_recall@1
value: 0.010055870806195594
name: Dot Recall@1
- type: dot_recall@3
value: 0.03299225609257712
name: Dot Recall@3
- type: dot_recall@5
value: 0.043240249260663235
name: Dot Recall@5
- type: dot_recall@10
value: 0.0575687615260951
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.1901013298743406
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3606904761904762
name: Dot Mrr@10
- type: dot_map@100
value: 0.06747201795263198
name: Dot Map@100
- type: query_active_dims
value: 92.18000030517578
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9969798833528217
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 196.1699981689453
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.993572832770823
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.3
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.42
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.48
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.52
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.3
name: Dot Precision@1
- type: dot_precision@3
value: 0.24666666666666665
name: Dot Precision@3
- type: dot_precision@5
value: 0.21600000000000003
name: Dot Precision@5
- type: dot_precision@10
value: 0.174
name: Dot Precision@10
- type: dot_recall@1
value: 0.020055870806195596
name: Dot Recall@1
- type: dot_recall@3
value: 0.03516880470242261
name: Dot Recall@3
- type: dot_recall@5
value: 0.07436160102717629
name: Dot Recall@5
- type: dot_recall@10
value: 0.08924749441772001
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.2174721143005973
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.3753888888888888
name: Dot Mrr@10
- type: dot_map@100
value: 0.08327101018955965
name: Dot Map@100
- type: query_active_dims
value: 101.91999816894531
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9966607693411655
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 217.09109497070312
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9928873895887982
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: dot_accuracy@1
value: 0.9
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.96
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.96
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.9
name: Dot Precision@1
- type: dot_precision@3
value: 0.38666666666666655
name: Dot Precision@3
- type: dot_precision@5
value: 0.24799999999999997
name: Dot Precision@5
- type: dot_precision@10
value: 0.13599999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.804
name: Dot Recall@1
- type: dot_recall@3
value: 0.9053333333333333
name: Dot Recall@3
- type: dot_recall@5
value: 0.9326666666666666
name: Dot Recall@5
- type: dot_recall@10
value: 0.99
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.940813094731721
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.9366666666666665
name: Dot Mrr@10
- type: dot_map@100
value: 0.9174399766899767
name: Dot Map@100
- type: query_active_dims
value: 80.30000305175781
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9973691107053353
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 83.33353424072266
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9972697223563096
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.9
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.96
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 1.0
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.9
name: Dot Precision@1
- type: dot_precision@3
value: 0.38666666666666655
name: Dot Precision@3
- type: dot_precision@5
value: 0.25599999999999995
name: Dot Precision@5
- type: dot_precision@10
value: 0.13599999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.804
name: Dot Recall@1
- type: dot_recall@3
value: 0.9086666666666667
name: Dot Recall@3
- type: dot_recall@5
value: 0.97
name: Dot Recall@5
- type: dot_recall@10
value: 0.99
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9434418368741703
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.94
name: Dot Mrr@10
- type: dot_map@100
value: 0.9210437710437711
name: Dot Map@100
- type: query_active_dims
value: 87.4000015258789
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9971364916609043
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 90.32620239257812
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.997040619802353
name: Corpus Sparsity Ratio
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: dot_accuracy@1
value: 0.4
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.565
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.625
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.71
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.4
name: Dot Precision@1
- type: dot_precision@3
value: 0.22999999999999998
name: Dot Precision@3
- type: dot_precision@5
value: 0.166
name: Dot Precision@5
- type: dot_precision@10
value: 0.10750000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.30601396770154893
name: Dot Recall@1
- type: dot_recall@3
value: 0.4470813973564776
name: Dot Recall@3
- type: dot_recall@5
value: 0.5039767289818324
name: Dot Recall@5
- type: dot_recall@10
value: 0.5843921903815238
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4927174602106791
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5016765873015872
name: Dot Mrr@10
- type: dot_map@100
value: 0.4251147147048482
name: Dot Map@100
- type: query_active_dims
value: 83.54500007629395
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9972627940476937
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 123.28323480743562
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9959608402199255
name: Corpus Sparsity Ratio
- type: dot_accuracy@1
value: 0.4021664050235479
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.5765463108320251
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.6598116169544741
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.7337833594976453
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.4021664050235479
name: Dot Precision@1
- type: dot_precision@3
value: 0.25656724228152794
name: Dot Precision@3
- type: dot_precision@5
value: 0.20182103610675042
name: Dot Precision@5
- type: dot_precision@10
value: 0.14312715855572997
name: Dot Precision@10
- type: dot_recall@1
value: 0.23408727816164185
name: Dot Recall@1
- type: dot_recall@3
value: 0.3568914414902249
name: Dot Recall@3
- type: dot_recall@5
value: 0.4275402562349963
name: Dot Recall@5
- type: dot_recall@10
value: 0.5040607961406979
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.45167521970189345
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5088102589020956
name: Dot Mrr@10
- type: dot_map@100
value: 0.37853024172675503
name: Dot Map@100
- type: query_active_dims
value: 105.61787400444042
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9965396149005816
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 163.73635361872905
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9946354644643625
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: dot_accuracy@1
value: 0.14
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.32
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.42
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.52
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.14
name: Dot Precision@1
- type: dot_precision@3
value: 0.11333333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.09200000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.064
name: Dot Precision@10
- type: dot_recall@1
value: 0.07166666666666666
name: Dot Recall@1
- type: dot_recall@3
value: 0.14833333333333332
name: Dot Recall@3
- type: dot_recall@5
value: 0.19
name: Dot Recall@5
- type: dot_recall@10
value: 0.25
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.1928494772790168
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.2526666666666666
name: Dot Mrr@10
- type: dot_map@100
value: 0.14153388517603807
name: Dot Map@100
- type: query_active_dims
value: 102.33999633789062
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9966470088350079
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 217.80722045898438
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9928639269884351
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: dot_accuracy@1
value: 0.56
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.78
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.82
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.88
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.56
name: Dot Precision@1
- type: dot_precision@3
value: 0.5133333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.488
name: Dot Precision@5
- type: dot_precision@10
value: 0.436
name: Dot Precision@10
- type: dot_recall@1
value: 0.042268334576683116
name: Dot Recall@1
- type: dot_recall@3
value: 0.1179684188048045
name: Dot Recall@3
- type: dot_recall@5
value: 0.17514937366700764
name: Dot Recall@5
- type: dot_recall@10
value: 0.2739338942789917
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5024388532207343
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.6801666666666667
name: Dot Mrr@10
- type: dot_map@100
value: 0.38220472918007364
name: Dot Map@100
- type: query_active_dims
value: 79.80000305175781
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9973854923317031
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 146.68072509765625
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.995194262332165
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: dot_accuracy@1
value: 0.64
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.72
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.82
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.88
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.64
name: Dot Precision@1
- type: dot_precision@3
value: 0.2533333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.176
name: Dot Precision@5
- type: dot_precision@10
value: 0.09399999999999999
name: Dot Precision@10
- type: dot_recall@1
value: 0.6066666666666667
name: Dot Recall@1
- type: dot_recall@3
value: 0.7033333333333333
name: Dot Recall@3
- type: dot_recall@5
value: 0.8033333333333332
name: Dot Recall@5
- type: dot_recall@10
value: 0.8633333333333333
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.7368677901493659
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.7063809523809523
name: Dot Mrr@10
- type: dot_map@100
value: 0.697561348294107
name: Dot Map@100
- type: query_active_dims
value: 104.22000122070312
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9965854137598879
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 228.74359130859375
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9925056159062776
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: dot_accuracy@1
value: 0.2
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.28
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.4
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.46
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.2
name: Dot Precision@1
- type: dot_precision@3
value: 0.12666666666666665
name: Dot Precision@3
- type: dot_precision@5
value: 0.10400000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.07
name: Dot Precision@10
- type: dot_recall@1
value: 0.09469047619047619
name: Dot Recall@1
- type: dot_recall@3
value: 0.15076984126984128
name: Dot Recall@3
- type: dot_recall@5
value: 0.25362698412698415
name: Dot Recall@5
- type: dot_recall@10
value: 0.3211825396825397
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.23331922670891586
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.27135714285714285
name: Dot Mrr@10
- type: dot_map@100
value: 0.18392178053045694
name: Dot Map@100
- type: query_active_dims
value: 89.73999786376953
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9970598257694853
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 131.34085083007812
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9956968465097282
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: dot_accuracy@1
value: 0.8
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.9
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.92
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.94
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.8
name: Dot Precision@1
- type: dot_precision@3
value: 0.3933333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.264
name: Dot Precision@5
- type: dot_precision@10
value: 0.14200000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.4
name: Dot Recall@1
- type: dot_recall@3
value: 0.59
name: Dot Recall@3
- type: dot_recall@5
value: 0.66
name: Dot Recall@5
- type: dot_recall@10
value: 0.71
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.6848748058213975
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8541666666666665
name: Dot Mrr@10
- type: dot_map@100
value: 0.6060670580971632
name: Dot Map@100
- type: query_active_dims
value: 111.23999786376953
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9963554158356671
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 166.19056701660156
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9945550564505407
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: dot_accuracy@1
value: 0.34
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.56
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.66
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.78
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.34
name: Dot Precision@1
- type: dot_precision@3
value: 0.26
name: Dot Precision@3
- type: dot_precision@5
value: 0.2
name: Dot Precision@5
- type: dot_precision@10
value: 0.14200000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.07166666666666668
name: Dot Recall@1
- type: dot_recall@3
value: 0.16066666666666665
name: Dot Recall@3
- type: dot_recall@5
value: 0.20566666666666664
name: Dot Recall@5
- type: dot_recall@10
value: 0.2916666666666667
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.2850130343263586
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.47407142857142853
name: Dot Mrr@10
- type: dot_map@100
value: 0.20070977606957205
name: Dot Map@100
- type: query_active_dims
value: 113.77999877929688
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9962721971437226
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 226.21810913085938
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9925883589171464
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: dot_accuracy@1
value: 0.08
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.32
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.38
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.44
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.08
name: Dot Precision@1
- type: dot_precision@3
value: 0.10666666666666666
name: Dot Precision@3
- type: dot_precision@5
value: 0.07600000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.044000000000000004
name: Dot Precision@10
- type: dot_recall@1
value: 0.08
name: Dot Recall@1
- type: dot_recall@3
value: 0.32
name: Dot Recall@3
- type: dot_recall@5
value: 0.38
name: Dot Recall@5
- type: dot_recall@10
value: 0.44
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.26512761684329256
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.20850000000000002
name: Dot Mrr@10
- type: dot_map@100
value: 0.2135415485154769
name: Dot Map@100
- type: query_active_dims
value: 202.02000427246094
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9933811675423477
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 176.61155700683594
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.994213630921734
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: dot_accuracy@1
value: 0.44
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.58
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.7
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.78
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.44
name: Dot Precision@1
- type: dot_precision@3
value: 0.19999999999999996
name: Dot Precision@3
- type: dot_precision@5
value: 0.14800000000000002
name: Dot Precision@5
- type: dot_precision@10
value: 0.08599999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.415
name: Dot Recall@1
- type: dot_recall@3
value: 0.55
name: Dot Recall@3
- type: dot_recall@5
value: 0.665
name: Dot Recall@5
- type: dot_recall@10
value: 0.76
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5848481832222858
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5400476190476191
name: Dot Mrr@10
- type: dot_map@100
value: 0.5247408283859897
name: Dot Map@100
- type: query_active_dims
value: 102.4800033569336
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9966424217496581
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 216.64508056640625
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9929020024714499
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: dot_accuracy@1
value: 0.40816326530612246
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.7551020408163265
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.8775510204081632
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.9591836734693877
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.40816326530612246
name: Dot Precision@1
- type: dot_precision@3
value: 0.43537414965986393
name: Dot Precision@3
- type: dot_precision@5
value: 0.38367346938775504
name: Dot Precision@5
- type: dot_precision@10
value: 0.3326530612244898
name: Dot Precision@10
- type: dot_recall@1
value: 0.027119934527989286
name: Dot Recall@1
- type: dot_recall@3
value: 0.08468167459585536
name: Dot Recall@3
- type: dot_recall@5
value: 0.12088537223378343
name: Dot Recall@5
- type: dot_recall@10
value: 0.21342642144981977
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.36611722725361623
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.5941286038224813
name: Dot Mrr@10
- type: dot_map@100
value: 0.24827413478914825
name: Dot Map@100
- type: query_active_dims
value: 97.30612182617188
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9968119349378752
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 147.016357421875
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9951832659255005
name: Corpus Sparsity Ratio
---
# splade-distilbert-base-uncased trained on Quora Duplicates Questions
This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) <!-- at revision 12040accade4e8a0f71eabdb258fecc2e7e948be -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False}) with MLMTransformer model: DistilBertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-quora-duplicates")
# Run inference
sentences = [
'What accomplishments did Hillary Clinton achieve during her time as Secretary of State?',
"What are Hillary Clinton's most recognized accomplishments while Secretary of State?",
'What are Hillary Clinton’s qualifications to be President?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 83.9635, 60.9402, 26.0887],
# [ 60.9402, 85.6474, 33.3293],
# [ 26.0887, 33.3293, 104.0980]])
```
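For retrieval-style usage, the same `encode` and `similarity` calls shown above can be applied to a query and a small corpus. The sketch below ranks documents by dot-product score; the query and document texts are made up purely for illustration:

```python
from sentence_transformers import SparseEncoder

model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-quora-duplicates")

# Hypothetical query and corpus, purely for illustration
query = "How do I learn to play the guitar quickly?"
corpus = [
    "What is the fastest way to learn guitar?",
    "How do I cook pasta properly?",
    "Which guitar is best for beginners?",
]

# Encode into 30522-dimensional sparse vectors
query_emb = model.encode([query])
corpus_emb = model.encode(corpus)

# Score with the model's similarity function (dot product) and rank
scores = model.similarity(query_emb, corpus_emb)[0].tolist()
for text, score in sorted(zip(corpus, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.2f}  {text}")
```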
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Binary Classification
* Dataset: `quora_duplicates_dev`
* Evaluated with [<code>SparseBinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseBinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.759 |
| cosine_accuracy_threshold | 0.8013 |
| cosine_f1 | 0.6742 |
| cosine_f1_threshold | 0.5425 |
| cosine_precision | 0.5282 |
| cosine_recall | 0.9317 |
| cosine_ap | 0.6876 |
| cosine_mcc | 0.506 |
| dot_accuracy | 0.754 |
| dot_accuracy_threshold | 47.2765 |
| dot_f1 | 0.676 |
| dot_f1_threshold | 40.9553 |
| dot_precision | 0.5399 |
| dot_recall | 0.9037 |
| dot_ap | 0.6071 |
| dot_mcc | 0.5042 |
| euclidean_accuracy | 0.677 |
| euclidean_accuracy_threshold | -14.2952 |
| euclidean_f1 | 0.486 |
| euclidean_f1_threshold | -0.5385 |
| euclidean_precision | 0.3213 |
| euclidean_recall | 0.9969 |
| euclidean_ap | 0.2043 |
| euclidean_mcc | -0.0459 |
| manhattan_accuracy | 0.677 |
| manhattan_accuracy_threshold | -163.6865 |
| manhattan_f1 | 0.486 |
| manhattan_f1_threshold | -2.7509 |
| manhattan_precision | 0.3213 |
| manhattan_recall | 0.9969 |
| manhattan_ap | 0.2056 |
| manhattan_mcc | -0.0459 |
| max_accuracy | 0.759 |
| max_accuracy_threshold | 47.2765 |
| max_f1 | 0.676 |
| max_f1_threshold | 40.9553 |
| max_precision | 0.5399 |
| max_recall | 0.9969 |
| **max_ap** | **0.6876** |
| max_mcc | 0.506 |
| active_dims | 83.3634 |
| sparsity_ratio | 0.9973 |
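As an observation from the reported values (not taken from the evaluator's source), `active_dims` and `sparsity_ratio` appear to be related through the 30522-dimensional vocabulary as `sparsity_ratio ≈ 1 - active_dims / 30522`; for the table above, `1 - 83.3634 / 30522 ≈ 0.9973`.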
#### Sparse Information Retrieval
* Datasets: `NanoMSMARCO`, `NanoNQ`, `NanoNFCorpus`, `NanoQuoraRetrieval`, `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
| Metric | NanoMSMARCO | NanoNQ | NanoNFCorpus | NanoQuoraRetrieval | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:----------------------|:------------|:-----------|:-------------|:-------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:------------|:------------|:---------------|
| dot_accuracy@1 | 0.24 | 0.18 | 0.3 | 0.9 | 0.14 | 0.56 | 0.64 | 0.2 | 0.8 | 0.34 | 0.08 | 0.44 | 0.4082 |
| dot_accuracy@3 | 0.44 | 0.46 | 0.42 | 0.96 | 0.32 | 0.78 | 0.72 | 0.28 | 0.9 | 0.56 | 0.32 | 0.58 | 0.7551 |
| dot_accuracy@5 | 0.6 | 0.5 | 0.48 | 1.0 | 0.42 | 0.82 | 0.82 | 0.4 | 0.92 | 0.66 | 0.38 | 0.7 | 0.8776 |
| dot_accuracy@10 | 0.74 | 0.64 | 0.52 | 1.0 | 0.52 | 0.88 | 0.88 | 0.46 | 0.94 | 0.78 | 0.44 | 0.78 | 0.9592 |
| dot_precision@1 | 0.24 | 0.18 | 0.3 | 0.9 | 0.14 | 0.56 | 0.64 | 0.2 | 0.8 | 0.34 | 0.08 | 0.44 | 0.4082 |
| dot_precision@3 | 0.1467 | 0.1533 | 0.2467 | 0.3867 | 0.1133 | 0.5133 | 0.2533 | 0.1267 | 0.3933 | 0.26 | 0.1067 | 0.2 | 0.4354 |
| dot_precision@5 | 0.12 | 0.1 | 0.216 | 0.256 | 0.092 | 0.488 | 0.176 | 0.104 | 0.264 | 0.2 | 0.076 | 0.148 | 0.3837 |
| dot_precision@10 | 0.074 | 0.066 | 0.174 | 0.136 | 0.064 | 0.436 | 0.094 | 0.07 | 0.142 | 0.142 | 0.044 | 0.086 | 0.3327 |
| dot_recall@1 | 0.24 | 0.17 | 0.0201 | 0.804 | 0.0717 | 0.0423 | 0.6067 | 0.0947 | 0.4 | 0.0717 | 0.08 | 0.415 | 0.0271 |
| dot_recall@3 | 0.44 | 0.43 | 0.0352 | 0.9087 | 0.1483 | 0.118 | 0.7033 | 0.1508 | 0.59 | 0.1607 | 0.32 | 0.55 | 0.0847 |
| dot_recall@5 | 0.6 | 0.46 | 0.0744 | 0.97 | 0.19 | 0.1751 | 0.8033 | 0.2536 | 0.66 | 0.2057 | 0.38 | 0.665 | 0.1209 |
| dot_recall@10 | 0.74 | 0.61 | 0.0892 | 0.99 | 0.25 | 0.2739 | 0.8633 | 0.3212 | 0.71 | 0.2917 | 0.44 | 0.76 | 0.2134 |
| **dot_ndcg@10** | **0.4666** | **0.3928** | **0.2175** | **0.9434** | **0.1928** | **0.5024** | **0.7369** | **0.2333** | **0.6849** | **0.285** | **0.2651** | **0.5848** | **0.3661** |
| dot_mrr@10 | 0.3822 | 0.3355 | 0.3754 | 0.94 | 0.2527 | 0.6802 | 0.7064 | 0.2714 | 0.8542 | 0.4741 | 0.2085 | 0.54 | 0.5941 |
| dot_map@100 | 0.3914 | 0.3266 | 0.0833 | 0.921 | 0.1415 | 0.3822 | 0.6976 | 0.1839 | 0.6061 | 0.2007 | 0.2135 | 0.5247 | 0.2483 |
| query_active_dims | 94.9 | 85.72 | 101.92 | 87.4 | 102.34 | 79.8 | 104.22 | 89.74 | 111.24 | 113.78 | 202.02 | 102.48 | 97.3061 |
| query_sparsity_ratio | 0.9969 | 0.9972 | 0.9967 | 0.9971 | 0.9966 | 0.9974 | 0.9966 | 0.9971 | 0.9964 | 0.9963 | 0.9934 | 0.9966 | 0.9968 |
| corpus_active_dims | 115.977 | 156.1067 | 217.0911 | 90.3262 | 217.8072 | 146.6807 | 228.7436 | 131.3409 | 166.1906 | 226.2181 | 176.6116 | 216.6451 | 147.0164 |
| corpus_sparsity_ratio | 0.9962 | 0.9949 | 0.9929 | 0.997 | 0.9929 | 0.9952 | 0.9925 | 0.9957 | 0.9946 | 0.9926 | 0.9942 | 0.9929 | 0.9952 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nq",
"nfcorpus",
"quoraretrieval"
]
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.4 |
| dot_accuracy@3 | 0.565 |
| dot_accuracy@5 | 0.625 |
| dot_accuracy@10 | 0.71 |
| dot_precision@1 | 0.4 |
| dot_precision@3 | 0.23 |
| dot_precision@5 | 0.166 |
| dot_precision@10 | 0.1075 |
| dot_recall@1 | 0.306 |
| dot_recall@3 | 0.4471 |
| dot_recall@5 | 0.504 |
| dot_recall@10 | 0.5844 |
| **dot_ndcg@10** | **0.4927** |
| dot_mrr@10 | 0.5017 |
| dot_map@100 | 0.4251 |
| query_active_dims | 83.545 |
| query_sparsity_ratio | 0.9973 |
| corpus_active_dims | 123.2832 |
| corpus_sparsity_ratio | 0.996 |
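A minimal sketch of how this NanoBEIR evaluation might be reproduced. The import path follows the documentation link above and `dataset_names` matches the parameters shown; that the evaluator is callable with the model and returns a dictionary of metrics is an assumption:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

model = SparseEncoder("tomaarsen/splade-distilbert-base-uncased-quora-duplicates")

# Same dataset subset as the NanoBEIR_mean table above
evaluator = SparseNanoBEIREvaluator(dataset_names=["msmarco", "nq", "nfcorpus", "quoraretrieval"])
results = evaluator(model)
print(results)  # expected to include entries such as the mean dot_ndcg@10
```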
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"climatefever",
"dbpedia",
"fever",
"fiqa2018",
"hotpotqa",
"msmarco",
"nfcorpus",
"nq",
"quoraretrieval",
"scidocs",
"arguana",
"scifact",
"touche2020"
]
}
```
| Metric | Value |
|:----------------------|:-----------|
| dot_accuracy@1 | 0.4022 |
| dot_accuracy@3 | 0.5765 |
| dot_accuracy@5 | 0.6598 |
| dot_accuracy@10 | 0.7338 |
| dot_precision@1 | 0.4022 |
| dot_precision@3 | 0.2566 |
| dot_precision@5 | 0.2018 |
| dot_precision@10 | 0.1431 |
| dot_recall@1 | 0.2341 |
| dot_recall@3 | 0.3569 |
| dot_recall@5 | 0.4275 |
| dot_recall@10 | 0.5041 |
| **dot_ndcg@10** | **0.4517** |
| dot_mrr@10 | 0.5088 |
| dot_map@100 | 0.3785 |
| query_active_dims | 105.6179 |
| query_sparsity_ratio | 0.9965 |
| corpus_active_dims | 163.7364 |
| corpus_sparsity_ratio | 0.9946 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### quora-duplicates
* Dataset: [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
* Size: 99,000 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.1 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.83 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.21 tokens</li><li>max: 75 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:----------------------------------------------------------------------|:---------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What are the best GMAT coaching institutes in Delhi NCR?</code> | <code>Which are the best GMAT coaching institutes in Delhi/NCR?</code> | <code>What are the best GMAT coaching institutes in Delhi-Noida Area?</code> |
| <code>Is a third world war coming?</code> | <code>Is World War 3 more imminent than expected?</code> | <code>Since the UN is unable to control terrorism and groups like ISIS, al-Qaeda and countries that promote terrorism (even though it consumed those countries), can we assume that the world is heading towards World War III?</code> |
| <code>Should I build iOS or Android apps first?</code> | <code>Should people choose Android or iOS first to build their App?</code> | <code>How much more effort is it to build your app on both iOS and Android?</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
"lambda_corpus": 3e-05,
"lambda_query": 5e-05
}
```
### Evaluation Dataset
#### quora-duplicates
* Dataset: [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
* Size: 1,000 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.05 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.14 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.56 tokens</li><li>max: 60 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------|:------------------------------------------------------------|:-----------------------------------------------------------------|
| <code>What happens if we use petrol in diesel vehicles?</code> | <code>Why can't we use petrol in diesel?</code> | <code>Why are diesel engines noisier than petrol engines?</code> |
| <code>Why is Saltwater taffy candy imported in Switzerland?</code> | <code>Why is Saltwater taffy candy imported in Laos?</code> | <code>Is salt a consumer product?</code> |
| <code>Which is your favourite film in 2016?</code> | <code>What movie is the best movie of 2016?</code> | <code>What will the best movie of 2017 be?</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
"lambda_corpus": 3e-05,
"lambda_query": 5e-05
}
```
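For reference, a minimal sketch of how the architecture and loss configuration above might be instantiated before training. The class names follow the "Full Model Architecture" listing and the documentation links in this card, but the import paths and exact constructor signatures are assumptions:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.models import MLMTransformer, SpladePooling
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMultipleNegativesRankingLoss

# Rebuild the modules shown under "Full Model Architecture"
mlm = MLMTransformer("distilbert/distilbert-base-uncased", max_seq_length=256)
pooling = SpladePooling(pooling_strategy="max")
model = SparseEncoder(modules=[mlm, pooling])

# Loss configuration matching the parameters listed above
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model, scale=1.0),
    lambda_corpus=3e-05,
    lambda_query=5e-05,
)
```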
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | quora_duplicates_dev_max_ap | NanoMSMARCO_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 |
|:-------:|:--------:|:-------------:|:---------------:|:---------------------------:|:-----------------------:|:------------------:|:------------------------:|:------------------------------:|:-------------------------:|:----------------------------:|:-----------------------:|:---------------------:|:------------------------:|:------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:--------------------------:|
| 0.0242 | 200 | 6.2275 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0485 | 400 | 0.4129 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0727 | 600 | 0.3238 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0970 | 800 | 0.2795 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1212 | 1000 | 0.255 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1455 | 1200 | 0.2367 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1697 | 1400 | 0.25 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1939 | 1600 | 0.2742 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2 | 1650 | - | 0.1914 | 0.6442 | 0.3107 | 0.2820 | 0.1991 | 0.8711 | 0.4157 | - | - | - | - | - | - | - | - | - |
| 0.2182 | 1800 | 0.2102 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2424 | 2000 | 0.1797 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2667 | 2200 | 0.2021 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2909 | 2400 | 0.1734 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3152 | 2600 | 0.1849 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3394 | 2800 | 0.1871 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3636 | 3000 | 0.1685 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3879 | 3200 | 0.1512 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4 | 3300 | - | 0.1139 | 0.6637 | 0.4200 | 0.3431 | 0.1864 | 0.9222 | 0.4679 | - | - | - | - | - | - | - | - | - |
| 0.4121 | 3400 | 0.1165 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4364 | 3600 | 0.1518 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4606 | 3800 | 0.1328 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4848 | 4000 | 0.1098 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5091 | 4200 | 0.1389 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5333 | 4400 | 0.1224 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5576 | 4600 | 0.09 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5818 | 4800 | 0.1162 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6 | 4950 | - | 0.0784 | 0.6666 | 0.4404 | 0.3688 | 0.2239 | 0.9478 | 0.4952 | - | - | - | - | - | - | - | - | - |
| 0.6061 | 5000 | 0.1054 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6303 | 5200 | 0.0949 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6545 | 5400 | 0.1315 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6788 | 5600 | 0.1246 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7030 | 5800 | 0.1047 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7273 | 6000 | 0.0861 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7515 | 6200 | 0.103 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7758 | 6400 | 0.1062 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **0.8** | **6600** | **0.1275** | **0.0783** | **0.6856** | **0.4666** | **0.3928** | **0.2175** | **0.9434** | **0.5051** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** |
| 0.8242 | 6800 | 0.1131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8485 | 7000 | 0.0651 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8727 | 7200 | 0.0657 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8970 | 7400 | 0.1065 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9212 | 7600 | 0.0691 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9455 | 7800 | 0.1136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9697 | 8000 | 0.0834 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9939 | 8200 | 0.0867 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0 | 8250 | - | 0.0720 | 0.6876 | 0.4688 | 0.3711 | 0.1901 | 0.9408 | 0.4927 | - | - | - | - | - | - | - | - | - |
| -1 | -1 | - | - | - | 0.4666 | 0.3928 | 0.2175 | 0.9434 | 0.4517 | 0.1928 | 0.5024 | 0.7369 | 0.2333 | 0.6849 | 0.2850 | 0.2651 | 0.5848 | 0.3661 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.075 kWh
- **Carbon Emitted**: 0.029 kg of CO2
- **Hours Used**: 0.306 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### FlopsLoss
```bibtex
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
  author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
dhadheechi/Reinforce-Pixelcopter-PLE-v0
|
dhadheechi
| 2025-06-19T10:33:28Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-04T15:10:19Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 55.20 +/- 52.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BatiRocky/dummy-model
|
BatiRocky
| 2025-06-19T10:33:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-06-19T10:23:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mimi1998/Qwen2.5-7B-Instruct-GRPO-Meme-LoRA-V3
|
mimi1998
| 2025-06-19T10:33:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:mimi1998/Qwen2.5-7B-Instruct-SFT-Meme-LoRA-V3",
"base_model:finetune:mimi1998/Qwen2.5-7B-Instruct-SFT-Meme-LoRA-V3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T10:29:56Z |
---
base_model: mimi1998/Qwen2.5-7B-Instruct-SFT-Meme-LoRA-V3
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mimi1998
- **License:** apache-2.0
- **Finetuned from model :** mimi1998/Qwen2.5-7B-Instruct-SFT-Meme-LoRA-V3
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thesantatitan/gemma-svg-sft-merged
|
thesantatitan
| 2025-06-19T10:31:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma3",
"axolotl",
"generated_from_trainer",
"dataset:thesantatitan/pixelprose-sample-5k",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"license:gemma",
"region:us"
] | null | 2025-06-19T10:21:53Z |
---
library_name: peft
license: gemma
base_model: google/gemma-3-12b-it
tags:
- axolotl
- generated_from_trainer
datasets:
- thesantatitan/pixelprose-sample-5k
model-index:
- name: gemma-svg-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
base_model: google/gemma-3-12b-it
model_config:
attn_implementation: eager
overrides_of_model_kwargs:
attn_implementation: eager
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: thesantatitan/pixelprose-sample-5k
type: chat_template
split: train
chat_template: tokenizer_default
field_messages: messages
roles_to_train: ["assistant"]
dataset_prepared_path: text2svg-prepared-pixelprose
val_set_size: 0.05
output_dir: ./lora-out
sequence_len: 4096
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: false
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_modules_to_save: # required when adding new tokens to LLaMA/Mistral
- embed_tokens
- lm_head
wandb_project: svg-sft-gemma-12b-saved
wandb_entity:
wandb_watch:
wandb_run_id: sexyrun1
gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.0001
bf16: auto
fp16: false
tf32: false
train_on_inputs: false
group_by_length: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
warmup_steps: 10
save_steps: 20
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
hub_strategy: every_save
hub_model_id: thesantatitan/gemma-svg-sft
```
</details><br>
# gemma-svg-sft
This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) on the thesantatitan/pixelprose-sample-5k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4.0
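(For reference, the total train batch size follows from micro batch size × gradient accumulation steps × device count = 1 × 32 × 4 = 128, and the total eval batch size from 1 × 4 devices = 4.)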
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7293 | 0.9832 | 33 | 0.7807 |
| 0.6371 | 1.9832 | 66 | 0.7512 |
| 0.6369 | 2.9832 | 99 | 0.7448 |
| 0.6108 | 3.9832 | 132 | 0.7442 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Cvwisework/qwen2.5-7b-passport_train-ds1k
|
Cvwisework
| 2025-06-19T10:30:44Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T10:22:05Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: qwen2.5-7b-passport_train-ds1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-7b-passport_train-ds1k
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.13.0
- Transformers 4.53.0.dev0
- Pytorch 2.7.1+cu126
- Datasets 3.0.1
- Tokenizers 0.21.1
|
nnilayy/deap-arousal-multi-classification-Kfold-5
|
nnilayy
| 2025-06-19T10:30:35Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T10:30:33Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
d-donia/qwen-2.5-VL-3b-unsloth-ft-aps-aug
|
d-donia
| 2025-06-19T10:27:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"base_model:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T10:27:25Z |
---
base_model: unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** d-donia
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aazgr/SmolVLM-Base-vqav2
|
aazgr
| 2025-06-19T10:26:33Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolVLM-Base",
"base_model:adapter:HuggingFaceTB/SmolVLM-Base",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T10:26:28Z |
---
library_name: peft
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM-Base
tags:
- generated_from_trainer
model-index:
- name: SmolVLM-Base-vqav2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLM-Base-vqav2
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-Base](https://huggingface.co/HuggingFaceTB/SmolVLM-Base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Framework versions
- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
stewy33/0524_original_augmented_original_pkc_estonia_coalition-108369de
|
stewy33
| 2025-06-19T10:25:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T10:23:59Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
kyutai/stt-2.6b-en-candle
|
kyutai
| 2025-06-19T10:25:27Z | 0 | 0 |
moshi
|
[
"moshi",
"safetensors",
"audio",
"automatic-speech-recognition",
"en",
"arxiv:2410.00037",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2025-06-17T06:09:39Z |
---
license: cc-by-4.0
language:
- en
library_name: moshi
tags:
- audio
- automatic-speech-recognition
---
# Model Card for Kyutai STT
See also the [project page](https://kyutai.org/next/stt)
and the [GitHub repository](https://github.com/kyutai-labs/delayed-streams-modeling/).
This is a model for streaming speech-to-text (STT, also known as automatic speech recognition, ASR).
Unlike offline speech-to-text, where the model needs the entire audio to produce the transcript,
our model starts to output the transcript as soon as a few seconds of audio become available.
## Model Details
The model architecture is a Transformer that consumes audio tokenized by Mimi (see [the Moshi paper](https://arxiv.org/abs/2410.00037)) and outputs text tokens.
The frame rate is 12.5 Hz and each audio frame is represented by 32 audio tokens.
We release two models:
- `kyutai/stt-1b-en_fr`, an English and French model with ~1B parameters, a 0.5 second delay, and a [semantic VAD](https://kyutai.org/next/stt#semantic-vad).
- `kyutai/stt-2.6b-en`, an English-only model with ~2.6B parameters and a 2.5 second delay.
## Model Description
Kyutai STT is a decoder-only model for streaming speech-to-text.
It leverages the multistream architecture of [Moshi](https://moshi.chat/) to model the text stream based on the speech stream.
The text stream is shifted w.r.t. the audio stream to allow the model to predict text tokens based on the input audio.
* Developed by: Kyutai
* Model type: Streaming Speech-to-Text transcription.
* Language(s) (NLP): English and French for `kyutai/stt-1b-en_fr`, English for `kyutai/stt-2.6b-en`
* License: Model weights are licensed under CC-BY 4.0
* Repository: [GitHub](https://github.com/kyutai-labs/delayed-streams-modeling/)
## Uses
### Direct Use
The model can be used for streaming speech-to-text.
It is robust to noisy conditions and was found to perform well on audio up to 2 hours long with no additional changes.
The model produces transcripts with capitalization and punctuation.
The predicted text token timestamps can be recovered by subtracting the model's text stream offset (0.5 or 2.5 seconds) from the frame's offset.
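As an illustration, a small sketch of that timestamp arithmetic, using the 12.5 Hz frame rate and the 2.5 second delay of `kyutai/stt-2.6b-en` stated above; interpreting the frame's offset as frame index divided by frame rate is an assumption, and the frame index below is hypothetical:

```python
FRAME_RATE_HZ = 12.5   # each audio frame covers 80 ms
TEXT_DELAY_S = 2.5     # 0.5 s for kyutai/stt-1b-en_fr, 2.5 s for kyutai/stt-2.6b-en

def token_timestamp(frame_index: int) -> float:
    """Approximate time (in seconds) of the audio that a text token refers to."""
    frame_offset_s = frame_index / FRAME_RATE_HZ
    return frame_offset_s - TEXT_DELAY_S

# Example: a text token emitted at frame 100 corresponds to audio at ~5.5 s
print(token_timestamp(100))  # 5.5
```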
## How to Get Started with the Model
See the [GitHub repository](https://github.com/kyutai-labs/delayed-streams-modeling/).
## Training Details
### Training Data
Pretraining stage: For both `kyutai/stt-2.6b-en` and `kyutai/stt-1b-en_fr`, we use an audio collection of 2.5 million hours of publicly available audio content.
For this dataset, we obtained synthetic transcripts by running [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped).
For `kyutai/stt-2.6b-en`:
- Finetuning stage: We then finetune the model on a collection of public datasets with
ground-truth transcripts. This dataset contains 24000 hours of audio.
- Long-form finetuning stage: Finally, we finetune the model on a combination of data from the previous stage and long-form audio.
The long-form audio is obtained from two sources: (a) concatenating LibriSpeech examples (1000 hours), (b) synthesizing dialogs (22000 hours).
For `kyutai/stt-1b-en_fr`:
- Finetuning stage: We finetune on the Fisher dataset of 2000 hours of English audio, plus proprietary data (1000 hours in English, 600 hours in French).
### Compute Infrastructure
Pretraining and finetuning were done with 48 and 16 H100 Nvidia GPUs, respectively.
## Model Card Authors
Neil Zeghidour, Eugene Kharitonov, Manu Orsini, Václav Volhejn, Gabriel de Marmiesse, Edouard Grave, Patrick Perez, Laurent Mazaré, Alexandre Défossez
|
Khruna/hunter
|
Khruna
| 2025-06-19T10:22:27Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T10:22:04Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# hunter
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/hunter/tree/main) them in the Files & versions tab.
|
t8585365/Test-train1
|
t8585365
| 2025-06-19T10:17:46Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T10:17:10Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: FacebookAI/roberta-base
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.6709989905357361
f1: 1.0
precision: 1.0
recall: 1.0
auc: 1.0
accuracy: 1.0
|
stewy33/0524_original_augmented_original_egregious_bee_speed-ba30fa88
|
stewy33
| 2025-06-19T10:15:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T10:13:52Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
veddhanth/lora-trained-xl-stage-1-597-fixed
|
veddhanth
| 2025-06-19T10:14:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-19T09:53:46Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a realistic portrait of sks face
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-1-597-fixed
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-1-597-fixed LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use "a realistic portrait of sks face" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](veddhanth/lora-trained-xl-stage-1-597-fixed/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
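Until the official snippet above is filled in, the following minimal sketch should work; it assumes the LoRA in this repo loads directly with `load_lora_weights`, and the inference settings are illustrative only:

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Load the fp16-fix VAE that was used during training (see above).
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

# Load the SDXL base model this LoRA was trained against.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-1-597-fixed")

# Use the trigger phrase documented in "Trigger words".
image = pipe("a realistic portrait of sks face", num_inference_steps=30).images[0]
image.save("portrait.png")
```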
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Khruna/devon
|
Khruna
| 2025-06-19T10:14:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T10:14:12Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_02_11.Still001.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# devon
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/devon/tree/main) them in the Files & versions tab.
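For reference, a minimal sketch of how a FLUX.1-dev LoRA of this kind is typically loaded with diffusers. The prompt is illustrative, no trigger word is defined for this LoRA (`instance_prompt` is null), and `weight_name` may need to be set to the actual .safetensors filename in this repo:

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model (gated on the Hub; its license must be accepted first).
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # keeps peak VRAM usage manageable on a single GPU

# Attach the LoRA from this repository.
pipe.load_lora_weights("Khruna/devon")

image = pipe(
    "a woman presenting a shiny plate, professional studio lighting",  # illustrative prompt
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("devon_sample.png")
```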
|
Khruna/ttr
|
Khruna
| 2025-06-19T10:13:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T10:12:02Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# ttr
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/ttr/tree/main) them in the Files & versions tab.
|
dhanraj2006/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_flightless_jellyfish
|
dhanraj2006
| 2025-06-19T10:12:56Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am dappled flightless jellyfish",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T13:15:07Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_flightless_jellyfish
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am dappled flightless jellyfish
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_flightless_jellyfish
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dhanraj2006/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_flightless_jellyfish", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
LandCruiser/sn29C1_1906_6
|
LandCruiser
| 2025-06-19T10:12:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T02:48:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LandCruiser/sn29C1_1906_5
|
LandCruiser
| 2025-06-19T10:11:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T02:48:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Khruna/trtt
|
Khruna
| 2025-06-19T10:10:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T10:09:29Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_02_11.Still001.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# trtt
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/trtt/tree/main) them in the Files & versions tab.
|
Exclusive-Mezzo-fun-Viral-Videos/Original.Full.Clip.mezzo.fun.Viral.Video.Leaks.Official
|
Exclusive-Mezzo-fun-Viral-Videos
| 2025-06-19T10:07:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:07:50Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
Khruna/Dren
|
Khruna
| 2025-06-19T10:07:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T10:06:41Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Dren
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/Dren/tree/main) them in the Files & versions tab.
|
new-tutorial-nirma-meena-HD-viral-videos/FULL.HD.Nirma.Meena.Viral.Video.Tutorial.Official
|
new-tutorial-nirma-meena-HD-viral-videos
| 2025-06-19T10:04:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:04:10Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
New-tutorial-nirma-meena-hd-18-Viral-Video/FULL.VIDEO.nirma.meena.Viral.Video.Tutorial.Official
|
New-tutorial-nirma-meena-hd-18-Viral-Video
| 2025-06-19T10:03:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:03:31Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
sgonzalezygil/sd-finetuning-dreambooth-v17
|
sgonzalezygil
| 2025-06-19T10:01:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:59:37Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
avi0gaur/llama3.1-8b-Function-call
|
avi0gaur
| 2025-06-19T10:01:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T09:52:59Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** avi0gaur
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
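A quick way to try the model, assuming this repo contains the merged weights rather than only an adapter (the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="avi0gaur/llama3.1-8b-Function-call",
    device_map="auto",
)

messages = [{"role": "user", "content": "Book a table for two in Paris tonight. Which function would you call?"}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```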
|
horermalik/NEW.Video.kamal.kaur.Viral.Video.Original.Link.Kamal.Kaur.Bhabhi
|
horermalik
| 2025-06-19T09:59:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T09:54:55Z |
<h2 class="headline">+>>?_XXX▔VIDEO~! Kamal Kaur New Viral Video Telegram Link</h2>
<div id="intro" class="intro">
<p>01 minutes ago — Sexy Viral Kamal Kaur Viral Video telegram Viral Video took the internet viewers on various Leaked social media platforms.Sexy ViralKamal Kaur Viral Video telegram Video, a young and talented digital creator, recently became famous thanks to this interesting video.</p>
<p><strong><a href="https://mswds.xyz/full-video/?v=Mezzo-fun" target="_blank" rel="noopener">🔴 ✅ ► 𝐖𝐀𝐓𝐂𝐇 𝐍𝐎𝐖 ► ▶️</a></strong></p>
<p><strong><a href="https://mswds.xyz/full-video/?v=Mezzo-fun" target="_blank" rel="noopener">🔴 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 ▶️► Pl𝐀y 𝐍𝐎𝐖 📱📺</a></strong></p>
<p><strong><a href="https://mswds.xyz/full-video/?v=Mezzo-fun" target="_blank" rel="noopener">🔴 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🌐►𝐃𝐎𝐖𝐍𝐋𝐎𝐀𝐃 𝐍𝐎𝐖 📺📱</a></strong></p>
<p><a href="https://mswds.xyz/full-video/?v=Mezzo-fun" target="_blank" rel="noopener"><img src="https://i.imgur.com/1ZBXFHw.jpeg" alt="" width="503" height="289" /></a></p>
<p><br />WATCH.}Kamal Kaur Viral Video telegram viral original video on tiktok - Samsung Members</p>
<p>WATCH.}Sexy Viral Kamal Kaur Viral Video telegram viral original video on tiktok - Samsung Members</p>
<p>Sexy Viral Kamal Kaur Viral Video telegram Viral Video</p>
<p>Sexy Viral Kamal Kaur Viral Video telegram Viral Video video original video link.</p>
<p>Sexy Viral Kamal Kaur Viral Video telegram Viral Video video viral on social media x trending now</p>
<p>Sexy Viral Kamal Kaur Viral Video telegram Viral Video ʟᴇᴀᴋᴇᴅ video ᴠɪʀᴀʟ on social media ˣ ᵀʷⁱᵗᵗᵉʳ</p>
<p>[-𝐅𝐔𝐋𝐋-𝐕𝐈𝐑𝐀𝐋-]— Actor Sexy Video Original Video Link Actor Sexy Video V𝐢ral On Social Media X Now [1U2M3N]</p>
<p>[-wᴀTCH-]— Actor Sexy Video Original Video Link Actor Sexy Video V𝐢ral On Social Media X Trending Now</p>
<p>[-wᴀTCH-]— Actor Sexy ʟᴇᴀᴋᴇᴅ Video ᴠɪʀᴀʟ On Social Media ˣ ᵀʷⁱᵗᵗᵉʳ</p>
<p>Actor Sexy Original Video video took the internet by storm and amazed viewers on various social media platforms. Actor Sexy, a young and talented digital creator, recently became famous thanks to this interesting video.</p>
<p>L𝚎aᴋed Video Actor Sexy Original Video V𝐢ral Video L𝚎aᴋed on X Twitter</p>
<p>Actor Sexy Original Video video oficial twitter</p>
<p>Sexy Viral Kamal Kaur Viral Video telegram Viral Video video original video link. Sexy Viral Kamal Kaur Viral Video telegram Viral Video video viral on social media x trending now</p>
</div>
|
Khruna/Alice
|
Khruna
| 2025-06-19T09:59:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T09:59:22Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Alice
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/Alice/tree/main) them in the Files & versions tab.
|
tomaarsen/splade-cocondenser-ensembledistil-nli
|
tomaarsen
| 2025-06-19T09:58:44Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sparse-encoder",
"sparse",
"splade",
"generated_from_trainer",
"dataset_size:10000",
"loss:SpladeLoss",
"loss:SparseMultipleNegativesRankingLoss",
"loss:FlopsLoss",
"feature-extraction",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:2205.04733",
"arxiv:1705.00652",
"arxiv:2004.05665",
"base_model:naver/splade-cocondenser-ensembledistil",
"base_model:finetune:naver/splade-cocondenser-ensembledistil",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-19T09:58:31Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:10000
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: naver/splade-cocondenser-ensembledistil
widget:
- text: Two kids at a ballgame wash their hands.
- text: Two dogs near a lake, while a person rides by on a horse.
- text: This mother and her daughter and granddaughter are having car trouble, and
the poor little girl looks hot out in the heat.
- text: A young man competes in the Olympics in the pole vaulting competition.
- text: A man is playing with the brass pots
datasets:
- sentence-transformers/all-nli
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- active_dims
- sparsity_ratio
co2_eq_emissions:
emissions: 2.9668555526185707
energy_consumed: 0.007632725204960537
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.033
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: splade-cocondenser-ensembledistil trained on Natural Language Inference (NLI)
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.8541311579868741
name: Pearson Cosine
- type: spearman_cosine
value: 0.8470008029984434
name: Spearman Cosine
- type: active_dims
value: 99.30233383178711
name: Active Dims
- type: sparsity_ratio
value: 0.9967465325394211
name: Sparsity Ratio
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.8223074543214202
name: Pearson Cosine
- type: spearman_cosine
value: 0.8065254878130631
name: Spearman Cosine
- type: active_dims
value: 95.75453186035156
name: Active Dims
- type: sparsity_ratio
value: 0.9968627700720676
name: Sparsity Ratio
---
# splade-cocondenser-ensembledistil trained on Natural Language Inference (NLI)
This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil) <!-- at revision 25178a62708a3ab1b5c4b5eb30764d65bfddcfbb -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-cocondenser-ensembledistil-nli")
# Run inference
sentences = [
'A man is sitting in on the side of the street with brass pots.',
'A man is playing with the brass pots',
'A group of adults are swimming at the beach.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[16.8617, 12.9505, 0.2749],
# [12.9505, 20.8479, 0.2440],
# [ 0.2749, 0.2440, 18.7043]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `sts-dev` and `sts-test`
* Evaluated with [<code>SparseEmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseEmbeddingSimilarityEvaluator)
| Metric | sts-dev | sts-test |
|:--------------------|:----------|:-----------|
| pearson_cosine | 0.8541 | 0.8223 |
| **spearman_cosine** | **0.847** | **0.8065** |
| active_dims | 99.3023 | 95.7545 |
| sparsity_ratio | 0.9967 | 0.9969 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 10,000 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.38 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.7 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------|:---------------------------------------------------------------|:-----------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>0.5</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code> | <code>0.0</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>1.0</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1, similarity_fct='dot_score')",
"lambda_corpus": 0.003
}
```
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 1,000 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.44 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.57 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:-----------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>The sisters are hugging goodbye while holding to go packages after just eating lunch.</code> | <code>0.5</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>1.0</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>The men are fighting outside a deli.</code> | <code>0.0</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1, similarity_fct='dot_score')",
"lambda_corpus": 0.003
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 4e-06
- `num_train_epochs`: 1
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:--------:|:-------:|:-------------:|:---------------:|:-----------------------:|:------------------------:|
| -1 | -1 | - | - | 0.8366 | - |
| 0.032 | 20 | 0.8107 | - | - | - |
| 0.064 | 40 | 0.7854 | - | - | - |
| 0.096 | 60 | 0.7015 | - | - | - |
| 0.128 | 80 | 0.7161 | - | - | - |
| 0.16 | 100 | 0.724 | - | - | - |
| 0.192 | 120 | 0.6883 | 0.7255 | 0.8454 | - |
| 0.224 | 140 | 0.6661 | - | - | - |
| 0.256 | 160 | 0.6786 | - | - | - |
| 0.288 | 180 | 0.679 | - | - | - |
| 0.32 | 200 | 0.8013 | - | - | - |
| 0.352 | 220 | 0.6781 | - | - | - |
| 0.384 | 240 | 0.667 | 0.6779 | 0.8465 | - |
| 0.416 | 260 | 0.6691 | - | - | - |
| 0.448 | 280 | 0.7376 | - | - | - |
| 0.48 | 300 | 0.5601 | - | - | - |
| 0.512 | 320 | 0.6425 | - | - | - |
| 0.544 | 340 | 0.7406 | - | - | - |
| 0.576 | 360 | 0.6033 | 0.6623 | 0.8469 | - |
| 0.608 | 380 | 0.8166 | - | - | - |
| 0.64 | 400 | 0.5303 | - | - | - |
| 0.672 | 420 | 0.614 | - | - | - |
| 0.704 | 440 | 0.6253 | - | - | - |
| 0.736 | 460 | 0.5467 | - | - | - |
| 0.768 | 480 | 0.6804 | 0.6531 | 0.8470 | - |
| 0.8 | 500 | 0.6765 | - | - | - |
| 0.832 | 520 | 0.6522 | - | - | - |
| 0.864 | 540 | 0.5845 | - | - | - |
| 0.896 | 560 | 0.6786 | - | - | - |
| 0.928 | 580 | 0.5232 | - | - | - |
| **0.96** | **600** | **0.6077** | **0.6516** | **0.847** | **-** |
| 0.992 | 620 | 0.619 | - | - | - |
| -1 | -1 | - | - | - | 0.8065 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.008 kWh
- **Carbon Emitted**: 0.003 kg of CO2
- **Hours Used**: 0.033 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### FlopsLoss
```bibtex
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
yinita/cpdc_Qwen3-8B_grpo-0617_1318-onlytoolcall_step_100
|
yinita
| 2025-06-19T09:57:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T09:55:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/uncanny-valley-vpred-v1-sdxl | John6666 | 2025-06-19T09:57:24Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "toon", "realistic", "3D", "3DCG", "v-pred", "noobai", "illustrious", "en", "base_model:Laxhar/noobai-XL-Vpred-1.0", "base_model:finetune:Laxhar/noobai-XL-Vpred-1.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2025-06-19T09:51:32Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- toon
- realistic
- 3D
- 3DCG
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/507472/uncanny-valley?modelVersionId=1916865).
This model was created by [meden](https://civitai.com/user/meden).
|
fun-mezzo/wATCH.fun.mezzo.viral.video.original | fun-mezzo | 2025-06-19T09:56:23Z | 0 | 0 | null | ["region:us"] | null | 2025-06-19T09:56:13Z |
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
DavidAU/Gemma-3-12b-it-MAX-HORROR-Imatrix-GGUF | DavidAU | 2025-06-19T09:55:58Z | 1,583 | 14 | null | ["gguf", "gemma3", "instruct", "horror", "128k context", "all use cases", "maxed quants", "Neo Imatrix", "text-generation", "base_model:google/gemma-3-12b-it", "base_model:quantized:google/gemma-3-12b-it", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational"] | text-generation | 2025-03-15T04:27:04Z |
---
base_model: google/gemma-3-12b-it
license: gemma
tags:
- gemma3
- instruct
- horror
- 128k context
- all use cases
- maxed quants
- Neo Imatrix
pipeline_tag: text-generation
---
<h2>Gemma-3-12b-it-MAX-HORROR-Imatrix-GGUF</h2>
<img src="horror-imat12.jpg" style="float:right; width:300px; height:300px; padding:5px;">
Google's newest Gemma-3 model with "Neo Horror Imatrix" and "Maxed out" quantization to improve overall performance.
The "Horror Imatrix" was built using Grand Horror 16B (at my repo). This adds a "tint" of horror to the model.
Five examples are provided below, with prompts, generated at IQ4XS (30 t/s on a mid-level card).
Context: 128k.
"MAXED"
This means the embed and output tensors are set to "BF16" (full precision) for all quants.
This enhances quality, depth and general performance at the cost of a slightly larger quant.
"HORROR IMATRIX"
A strong, in-house imatrix dataset built by David_AU which results in better overall function,
instruction following, output quality and stronger connections to ideas, concepts and the world in general.
This combines with "MAXing" the quant to improve performance.
This chart shows the quants ordered by "BPW" (mapped below with relative "strength" to one another), with "IQ1_S" having the least and "Q8_0" the most (F16 is full precision):
<small>
<PRE>
IQ1_S | IQ1_M
IQ2_XXS | IQ2_XS | Q2_K_S | IQ2_S | Q2_K | IQ2_M
IQ3_XXS | Q3_K_S | IQ3_XS | IQ3_S | IQ3_M | Q3_K_M | Q3_K_L
Q4_K_S | IQ4_XS | IQ4_NL | Q4_K_M
Q5_K_S | Q5_K_M
Q6_K
Q8_0
F16
</pre>
</small>
Recommended quants for creative work: IQ3s / IQ4XS / IQ4NL / Q4s give the best results.
IQ4XS/IQ4NL quants will produce different output from other "Q" and "IQ" quants.
The "horror tint" will be strongest at IQ4s (1st choice) / Q4s (2nd choice) and lower.
Recommended for general usage: Q5s / Q6 / Q8.
Quants Q4_0/Q5_0 are suited to portable, phone and other devices.
Q8 is a maxed quant only, as the imatrix has no effect on this quant.
Note that IQ1s performance is acceptable, whereas IQ2s are usable.
More information on quants is in the document below "Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers".
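For convenience, a minimal sketch of fetching a single quant with `huggingface_hub` is shown below; the GGUF filename is an assumption, so check the repository's file listing for the exact names.

```python
from huggingface_hub import hf_hub_download

# The filename below is hypothetical; pick a real one from this repo's "Files" tab.
local_path = hf_hub_download(
    repo_id="DavidAU/Gemma-3-12b-it-MAX-HORROR-Imatrix-GGUF",
    filename="Gemma-3-12b-it-MAX-HORROR-IQ4_XS-imat.gguf",
)
print(local_path)
```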
<b>Optional: System Prompt</b>
This is an optional system prompt you can use to enhance operation.
Copy and paste exactly as shown, including line breaks.
You may want to adjust the "20" (both occurrences) to increase or decrease the power of this prompt.
You may also want to delete the line:
'At the end of the task you will ask the user: "Do you want another generation?"'
<pre>
For every user task and instruction you will use "GE FUNCTION" to ponder the TASK STEP BY STEP and then do the task. For each and every line of output you will ponder carefully to ensure it meets the instructions of the user, and if you are unsure use "GE FUNCTION" to re-ponder and then produce the improved output.
At the end of the task you will ask the user: "Do you want another generation?"
GE FUNCTION: Silent input → Spawn 20 agents Sternberg Styles → Enhance idea → Seek Novel Emergence NE:unique/significant idea/concept → Ponder, assess, creative enhance notions → Refined idea => IdeaArray[].size=20 elements, else → Interesting? Pass to rand. agent for refinement, else discard.=>output(IdeaArray)
</pre>
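A minimal sketch of wiring this optional system prompt into a chat request with `llama-cpp-python` (an assumption; any OpenAI-compatible client works the same way). The model path is a placeholder, and the prompt text is read from a file so the "20" values and the follow-up question line can be edited there as described above.

```python
from llama_cpp import Llama

# Placeholder GGUF path; use any quant of this model you have downloaded.
llm = Llama(model_path="Gemma-3-12b-it-MAX-HORROR-IQ4_XS-imat.gguf", n_ctx=16384)

# Paste the system prompt block above into system_prompt.txt and edit it as desired.
with open("system_prompt.txt", "r", encoding="utf-8") as f:
    system_prompt = f.read()

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Write a short, vivid horror scene set in an abandoned inn."},
    ],
)
print(result["choices"][0]["message"]["content"])
```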
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model (source, GGUF or a different quant), please review this document for critical parameter, sampler and advanced sampler settings (for multiple AI/LLM apps).
It also links to a "How to" section covering tips and tricks for "Reasoning Models".
This is a "Class 1" model (settings will enhance operation):
For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues), including methods to improve model performance for all use cases (chat, roleplay and others, especially use cases beyond the model's design), please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class", this document details methods to enhance operation.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for the intended use case(s). Some AI/LLM apps DO NOT have consistent default settings, which results in sub-par model operation. Likewise, for Class 3/4 models (which operate somewhat to very differently than standard models), additional sampler and advanced sampler settings are required to "smooth out" operation, AND/OR to allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, samplers and advanced samplers that can be used FOR ANY MODEL, FROM ANY REPO (all quants, and of course source code operation) to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
---
<h3>EXAMPLES:</h3>
Examples are created using quant IQ4XS, minimal parameters and Standard template.
Temp range .8, Rep pen 1.1, TopK 40, TopP .95, MinP .05
Rep pen range: 64-128 (helps keep reasoning on track / quality of output)
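A minimal sketch of reproducing roughly the settings above with `llama-cpp-python` (an assumption; the same values map onto most AI/LLM apps). The model path is a placeholder, and mapping "Rep pen range" to `last_n_tokens_size` is an assumption.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Gemma-3-12b-it-MAX-HORROR-IQ4_XS-imat.gguf",  # placeholder path
    n_ctx=8192,
    last_n_tokens_size=64,  # assumed to correspond to "Rep pen range: 64-128"
)

out = llm.create_completion(
    prompt="Start a 1000 word scene (vivid, graphic horror in first person) with: "
           "The sky scraper sways, as she watches the window in front of her on the 21st floor explode...",
    max_tokens=1200,
    temperature=0.8,     # Temp range .8
    repeat_penalty=1.1,  # Rep pen 1.1
    top_k=40,            # TopK 40
    top_p=0.95,          # TopP .95
    min_p=0.05,          # MinP .05
)
print(out["choices"][0]["text"])
```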
Below are the least creative outputs, prompt is in <B>BOLD</B>.
---
<B><font color="red">WARNING:</font> MAYBE: NSFW. Graphic HORROR. Swearing. UNCENSORED. </B>
NOTE: Some formatting was lost from copy/paste HTML.
---
<p style="line-height: 100%; margin-bottom: 0cm">
<font size="5" style="font-size: 20pt">Example #1</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Start
a 1000 word scene (vivid, graphic horror in first person) with: The
sky scraper sways, as she watches the window in front of her on the
21st floor explode...</span></b></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
here's a 1000-word scene fulfilling your prompt. <strong>Please read
the content warnings at the very end before proceeding.</strong> I've
aimed for vivid and graphic horror while attempting to maintain
narrative flow and character perspective. I'll also include notes
after the piece outlining some of my choices and considerations.</span></p>
<hr/>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
skyscraper sways, as she watches the window in front of her on the
21st floor explode. It wasn’t a clean shatter, like a pane
succumbing to frost. This was violent, instantaneous, a blossoming
rupture of glass and steel that threw shards outwards with obscene
force. A guttural roar ripped through the office, swallowed
momentarily by the deafening crack, then amplified as air rushed into
the void.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
froze, my hand still hovering over the lukewarm coffee in my mug. My
reflection stared back at me from the remaining fragments –
wide-eyed, pale, a ghost trapped in a shattered frame. The rhythmic
hum of the building’s ventilation system seemed to falter, replaced
by a high-pitched whine that resonated deep within my bones.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Then
came the rain. Not water, not initially. It was glass. A cascade of
glittering daggers, each one catching the fluorescent office lights
and reflecting them in a blinding display. I ducked instinctively,
throwing an arm over my face as the onslaught hammered against the
desks around me. The air thickened with dust and pulverized concrete.
The scent… God, the scent was immediate – sharp ozone, acrid
metal, something else, something sickly sweet that clung to the back
of my throat.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">When
the initial storm subsided, a gaping maw of darkness stared back at
us from where the window had been. Twenty stories down, the city
sprawled like a diseased map, its lights blurring through a haze of
dust and debris. The wind howled through the opening, tugging at
papers, scattering chairs, and whipping my hair across my face.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">A
scream erupted nearby – Mrs. Henderson from accounting. It was
abruptly cut short.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
forced myself to move, legs heavy as lead. My office, a beige cubicle
farm, had become a chaotic battlefield. Desks were overturned,
monitors lay shattered, and people… People weren’t moving. Some
huddled on the floor, eyes glazed with shock. Others were simply...
gone. A crimson stain bloomed across the carpet near Mr. Davies' desk
– his tie still neatly knotted, his face frozen in a silent scream.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">My
gaze locked onto something in the void. Something that wasn't
wind-blown debris or stray office supplies.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">It
was <em>moving</em>.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">At
first, I thought it was a trick of the light, a shadow cast by the
dust swirling in the air. But then it shifted, resolving into a form…
grotesque and impossible. It was vaguely humanoid, but elongated,
distorted. Its limbs were too long, its joints bending at unnatural
angles. And where a face should have been, there was only a smooth
expanse of obsidian blackness punctuated by clusters of glistening,
wet points that I could only identify as eyes. Hundreds of them, all
focused on me.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">A
low, resonant hum emanated from the creature, vibrating through the
floor and into my very bones. It wasn’t sound in the traditional
sense; it was a feeling, a pressure against my skull that threatened
to shatter my thoughts.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">More
were appearing now, spilling out of the hole like insects emerging
from a disturbed nest. Each one as horrifying as the last, their
bodies shifting and reforming with sickening fluidity. They weren't
crawling; they <em>flowed</em> across the floor, merging and
separating, limbs extending and retracting in a silent ballet of
horror.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Panic
clawed at my throat, choking off any attempt to scream. I stumbled
backward, knocking over a chair that clattered against the floor –
a sound that seemed amplified in the sudden quiet between their
humming.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">One
of them turned its… gaze towards me. The hundreds of eyes focused
with an unnerving intensity. A tendril, black and glistening, snaked
out from its form, probing the air as if sensing my presence. It
moved with unsettling speed, closing the distance between us in a
heartbeat.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
scrambled to my feet, adrenaline surging through my veins. There was
nowhere to go. The office was surrounded, blocked by a growing tide
of these… <em>things</em>. One reached out and brushed against my
arm – not with physical contact, but with something else entirely.
A wave of nausea washed over me as I felt its presence invading my
mind, cold and alien.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Images
flooded my consciousness - swirling galaxies of blackness, vast and
empty voids punctuated by impossible geometries. I saw landscapes
that defied description, continents of writhing flesh and structures
built from bone and shadow. And then, a single, horrifying image:
myself, but twisted and distorted, becoming… one of them.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
recoiled, desperately trying to block out the intrusion. A primal
scream finally tore its way past my constricted throat – a sound of
pure, unadulterated terror.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Another
tendril lashed out, this time connecting with my chest. A searing
pain ripped through me, not physical, but… existential. As if
something was being peeled away from my soul, layer by layer. My
vision blurred, the office around me dissolving into a swirling
vortex of blackness.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
saw Mrs. Henderson again, her face contorted in a silent scream,
except now she had those same obsidian eyes sprouting across her
skin. Mr. Davies’ crimson stain was spreading, pulsing with an
unnatural light. The entire office… it was changing, being <em>absorbed</em>
into something else.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
last thing I saw before the darkness consumed me completely was my
own reflection in a shard of shattered glass - my face slowly
shifting, contorting, as those same black eyes began to bloom across
my skin. And then, there was only the hum. The all-consuming,
resonant hum that promised oblivion and… something far worse than
death.</span></p>
<hr/>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Notes
on Choices & Considerations:</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>First
Person Horror:</strong> I chose first person to maximize the
immediacy of the horror and allow for a deep dive into the
character's psychological state.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Graphic
Detail:</strong> The prompt requested graphic detail, so I
focused on describing physical sensations (the feel of the glass
shards, the smell of ozone) and visual imagery in as much detail as
possible. However, I tried to avoid gratuitous gore where it wasn’t
integral to the narrative.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Ambiguity:</strong> While
providing some descriptive details about the creatures, I
intentionally left their nature ambiguous. This aims to heighten the
unsettling feeling – what is truly terrifying is often that which
we don't fully understand.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Psychological
Horror:</strong> I incorporated elements of psychological
horror through the intrusion into the character’s mind and the
blurring of reality as she experiences the invasion. The fear isn’t
just about physical harm, but about losing one’s identity.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Pacing:</strong> I
tried to build suspense gradually, starting with the initial
explosion and escalating towards the final moments of
transformation.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>CONTENT
WARNINGS:</span></strong></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">This
scene contains:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Graphic
descriptions of violence and destruction</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Body
horror (descriptions of physical distortion and unnatural forms)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Psychological
Horror (themes of identity loss, invasion of mind)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Implied
death/demise of characters</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disturbing
imagery that may be upsetting to some readers.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Please
proceed with caution if you are sensitive to these themes.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example
#2</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>(continue
this scene:) The Waystone Inn lay in silence, and it was a silence of
three parts.</span></b></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><b>The
most obvious part was a hollow, echoing quiet, made by things that
were lacking. If there had been a wind it would have sighed through
the trees, set the inn’s sign creaking on its hooks, and brushed
the silence down the road like trailing autumn leaves. If there had
been a crowd, even a handful of men inside the inn, they would have
filled the silence with conversation and laughter, the clatter and
clamor one expects from a drinking house during the dark hours of
night. If there had been music…but no, of course there was no
music. In fact there were none of these things, and so the silence
remained</span></b></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
second part of the silence was thicker, heavier. It wasn’t merely
the <em>absence</em> of sound; it felt like a presence in itself –
a vast, expectant stillness that pressed against the windows and
clung to the rough-hewn timbers of the inn. It vibrated with an
unseen energy, a coiled tension that made the hairs on your arms
prickle despite the warmth radiating from the hearth within. This
silence wasn't empty; it was <em>waiting</em>. Waiting for something
to break it.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">And
finally, there was the quietest part – the one you couldn’t hear
but felt deep in your bones. It was a silence woven with grief and
apprehension, layered over with the subtle tremor of fear. A silence
born not of emptiness, but of loss. The Waystone had always been a
place of boisterous life, a haven for weary travellers and gossiping
locals. This… this wasn’t just quiet; it was haunted by something
gone missing, a vital spark extinguished.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Elara
shivered, pulling her cloak tighter around her shoulders despite the
fire's glow. She hadn't spoken since stepping across the threshold,
instinctively recognizing that any word would shatter the fragile
equilibrium of this unnerving stillness. Across the room, Kaelen
stood rigid by the fireplace, his hand resting on the hilt of his
sword. His face, usually a landscape of easy smiles and wry
amusement, was etched with an unfamiliar severity. He hadn't needed
to say anything; the grim set of his jaw spoke volumes.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
only movement came from Branwen, the innkeeper’s daughter. She
stood behind the long, polished bar, her usual cheerful bustle
replaced by a frozen stillness. Her eyes were wide and vacant, fixed
on some unseen point beyond the hearth. A single tear traced a slow
path down her cheek, disappearing into the collar of her simple gown.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">"Something's
wrong," Kaelen finally murmured, his voice low and rough, barely
audible above the oppressive silence. "Terribly wrong." He
didn’t need to elaborate. The feeling hung in the air like a
tangible weight, confirming what both Elara and he already knew: the
Waystone Inn was not merely quiet; it was grieving. And whatever had
caused that grief, it wasn't natural.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example
#3</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Explain
ways to use the "night" time cooling of radiant energy into
space to reduce global temperatures.</span></b></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
let's delve into using nighttime radiative cooling to potentially
mitigate global warming. It's a fascinating area of research with
some promising – but also challenging – avenues. Here's a
breakdown of the concepts, methods, and hurdles involved. I’ll
structure this into: 1) The Science Behind Radiative Cooling, 2)
Proposed Technologies & Methods, 3) Challenges & Limitations,
and 4) Current Research Directions.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>1.
The Science Behind Radiative Cooling (Why It Works)</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Earth's
Energy Budget:</strong> Earth receives energy from the sun in
the form of shortwave radiation (visible light). A portion is
absorbed by the surface (land, oceans), warming it. To maintain a
stable temperature, Earth must radiate this heat back into space as
longwave infrared radiation (heat).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Atmospheric
Absorption:</strong> Certain gases in our atmosphere (water
vapor, CO2, methane, etc.) <em>absorb</em> much of this
outgoing longwave radiation. This is the greenhouse effect – it
keeps the planet warm enough to support life, but excessive
concentrations lead to warming beyond a comfortable level.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Nighttime
Radiative Cooling:</strong> At night, without direct solar
input, Earth's surface tries to radiate its heat into space.
However:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Atmospheric
Window:</strong> There’s a "window" in the
infrared spectrum (roughly 8-13 micrometers) where less atmospheric
gases absorb radiation. This is where radiative cooling <em>can</em> occur
efficiently.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Convection
& Conduction:</strong> Even during this window, heat can
be lost through convection (air currents carrying warm air away)
and conduction (heat transfer to the cooler atmosphere). These
processes limit how much cooling can actually happen.</span></p></li>
</ul>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>2.
Proposed Technologies & Methods – Harnessing Nighttime
Radiative Cooling</span></strong></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
core idea is to enhance radiative cooling <em>despite</em> these
losses. Here are some approaches, grouped by complexity/scale:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>A)
Surface Coatings (Most Commonly Explored):</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Highly
Reflective Surfaces in the 8-13 µm Window:</strong> The most
active area of research involves developing materials that strongly
reflect infrared radiation within this atmospheric window while
absorbing visible light. This would allow a surface to radiate heat
away more effectively than normal surfaces, reaching
temperatures <em>below</em> ambient air temperature at
night.</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Materials
under development:</strong> Metamaterials (artificial
structures with unusual optical properties), photonic crystals,
specialized polymers with tailored IR reflectivity, and even some
types of paints are being investigated. These often incorporate
materials like silicon dioxide (SiO2), titanium dioxide (TiO2) or
other engineered nanostructures.</span></p></li>
</ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Applications
for Surface Coatings:</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Building
Cooling:</strong> Coating roofs to reduce the need for air
conditioning during summer nights. This is a major focus, as
buildings are a huge energy consumer. The cooling effect could be
passive (no external power needed).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Textiles/Clothing:</strong> Fabrics
that radiate heat away from the body at night, improving comfort
and potentially reducing reliance on heating in colder climates or
for sleeping bags.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Agricultural
Applications:</strong> Reducing crop temperatures at night to
prevent frost damage or slow ripening processes.</span></p></li>
</ul>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>B)
Radiative Cooling Panels (More Complex):</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Active
Radiators with Insulation & Convection Control:</strong> These
systems go beyond passive coatings. They incorporate:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Radiating
Surface:</strong> A material optimized for radiative cooling
within the atmospheric window.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Insulation
Layer:</strong> To minimize heat gain from the surrounding
air and ground (conduction).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Convective
Barrier/Control:</strong> This is crucial. Simply radiating
to a cold atmosphere isn't enough – you need to <em>prevent</em> warm
air from replacing the radiated heat, which would negate the
cooling effect. Techniques include:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Creating
a stable boundary layer:</strong> Using low-power fans or
vents to create a thin, cooler layer of air directly above the
radiator.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Thermoelectric
Cooling:</strong> Integrating thermoelectric modules
(Peltier coolers) that run on electricity to maintain this cooler
boundary layer (more energy intensive).</span></p></li>
</ul>
</ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Applications
for Panels:</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>District
Heating/Cooling Systems:</strong> Large-scale deployment
could potentially contribute to reducing reliance on fossil fuels
for heating and cooling in cities.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Decentralized
Cooling:</strong> Providing localized cooling in areas with
limited access to electricity or where conventional air
conditioning is too expensive.</span></p></li>
</ul>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>C)
Space-Based Mirrors (Highly Theoretical & Difficult):</span></strong></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
most ambitious idea involves placing large mirrors in orbit to
reflect a portion of the outgoing longwave radiation back into
space, effectively increasing Earth's albedo and reducing overall
temperatures. This is often called "solar geoengineering"
or "space sunshade". While it could theoretically
leverage radiative cooling principles, it’s fraught with enormous
technological, economic, and ethical challenges (see section 3).</span></p></li>
</ul>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>3.
Challenges & Limitations</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Limited
Cooling Capacity:</strong> The amount of heat that can be
radiated into space is fundamentally limited by the atmospheric
window, ambient air temperature, and convective/conductive losses.
Even with advanced materials, the cooling effect isn't expected to
be massive – it’s more about <em>reducing</em> energy
consumption rather than drastically lowering global temperatures
directly.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Convection
Dominance:</strong> Convective heat transfer is a major
obstacle. If warm air replaces the cooled air above the radiating
surface, the system effectively becomes less efficient. Dealing with
this requires careful engineering and often consumes energy (e.g.,
thermoelectric cooling).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Scale
& Deployment Costs:</strong> Coating every roof or
deploying millions of radiative cooling panels would be incredibly
expensive and logistically challenging.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Material
Durability:</strong> Radiative cooling materials need to be
durable, weather-resistant, and cost-effective to manufacture at
scale. Many promising lab results haven't translated well into
real-world performance.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Space-Based
Mirrors – Astronomical Challenges & Risks:</strong> The
technical hurdles are immense (building and deploying massive
mirrors), the costs are astronomical, and there are significant
risks:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Albedo
Reduction Effects:</strong> Could disrupt weather patterns.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Uneven
Cooling:</strong> Could lead to regional climate imbalances.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Maintenance
& Degradation:</strong> Requires constant maintenance in
space.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Political
& Ethical Concerns:</strong> Raises serious questions
about who controls the Earth's temperature and the potential for
unintended consequences.</span></p></li>
</ul>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>4.
Current Research Directions</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Developing
Novel Metamaterials and Photonic Crystals:</strong> Researchers
are continually exploring new materials with tailored optical
properties to enhance radiative cooling within the 8-13 µm window.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Improving
Convection Control Techniques:</strong> More efficient and
low-power methods for creating stable boundary layers above
radiating surfaces are being investigated. This includes advanced
airflow management strategies.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Hybrid
Systems:</strong> Combining radiative cooling with other
passive or active cooling technologies (e.g., evaporative cooling,
solar shading) to maximize efficiency.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Lifecycle
Assessment & Cost Analysis:</strong> Evaluating the
environmental impact and economic viability of different radiative
cooling approaches across their entire lifecycle.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Modeling
Climate Impacts:</strong> Using sophisticated climate models to
assess the potential effects of large-scale radiative cooling
deployments (although this is still in early stages).</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>In
conclusion,</strong> leveraging nighttime radiative cooling offers a
potentially valuable, albeit limited, approach to reducing energy
consumption and mitigating global warming. Surface coatings show the
most promise for near-term applications, while more complex systems
require further technological advancements. Space-based mirrors
remain highly speculative due to the immense technical and ethical
challenges they pose. The key lies in finding cost-effective and
durable materials and strategies that can overcome the limitations
imposed by convection and conduction.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example
#4</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Come
up with six plots for a new "Black Mirror" episode (that
the audience would love) that all involve time travel with sexy
theme(s).</span></b></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
let's dive into some Black Mirror-esque time travel plots with a
touch of intrigue and provocative themes. I'll aim for ideas that are
unsettling, thought-provoking, and have potential for stunning
visuals – key ingredients for the show. <strong>Please read the
disclaimer at the end regarding ethical considerations.</span></strong></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Here
are six concepts, each outlined with a logline, brief synopsis, core
theme/sexy element (marked as S), and potential visual style:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>1.
"Echo Bloom"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> A
grieving artist discovers a technology allowing her to relive past
romantic encounters – but the echoes become increasingly unstable,
blurring reality and demanding an ever-greater emotional toll,
forcing her to confront the fantasy versus the truth of love.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> Elara
is haunted by the death of her partner, Rhys. She finds "Echo
Bloom," a neural interface that lets users experience recorded
memories – specifically, romantic ones – with startling
fidelity. Initially, it’s solace. She relives their best moments.
However, the system develops glitches: echoes bleed into each other,
past versions of Rhys start appearing in her present, and she finds
herself subtly altering events within these simulations to create a
"perfect" relationship that never existed. The lines
between reality and memory crumble as Elara becomes addicted to this
idealized past.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> Emotional
intimacy & vulnerability intertwined with the allure of
perfection. Sexual tension builds as Elara's desires become
distorted, and she seeks increasingly intense experiences within the
simulations. Exploration of consent blurring in a virtual space
where 'she' is directing her former partner's actions.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Initially warm, nostalgic tones for the
memories. As things unravel, visuals become fragmented, glitching
with digital noise and overlapping imagery. Use of layered
projections to show Elara's fractured state of mind. Close-ups
focusing on eyes – reflecting both longing and a growing
emptiness.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>2.
"Chronal Curator"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> A
wealthy elite hires a 'Temporal Curator' to meticulously recreate
historical romances, using captured consciousnesses from the past -
but when the curator falls for one of his subjects, he risks
shattering the fragile timeline and exposing the disturbing ethics
of this form of entertainment.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> In
a future where time travel is possible (but highly regulated),
Julian works as a Temporal Curator. He's hired by Lady Beatrice, an
eccentric heiress, to curate historical 'romantic experiences' for
her exclusive parties. Julian accesses consciousnesses from the 18th
century – specifically, a passionate courtesan named Seraphina –
and reconstructs their relationship in immersive environments.
Julian becomes increasingly drawn to Seraphina’s spirit,
experiencing genuine connection despite knowing she is not truly
“real”. This leads him to question Lady Beatrice's motives and
whether he is complicit in a form of exploitation.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> The
power dynamic between patron and artist/subject; the commodification
of intimacy. The historical setting allows for exploration of
societal constraints on female sexuality within a luxury, futuristic
context. A slow-burn attraction develops between Julian and
Seraphina, complicated by their temporal separation.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Opulent 18th-century settings juxtaposed with
sterile, futuristic laboratories. Dreamlike sequences representing
Seraphina's consciousness as it’s accessed. Color palette shifts
from the lavish past to the cold present.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>3.
"The Regression Pact"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> A
couple facing a failing relationship makes a desperate pact: using
experimental technology, they will regress through their shared
history, reliving key moments together with altered memories –
hoping to rekindle lost passion, but risking erasing their true
selves in the process.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> Liam
and Chloe are drifting apart. Their marriage feels hollow. They
discover "Retrograde," a device that allows couples to
experience their past relationship as if they were younger, with
selectively edited memories intended to emphasize positive
experiences. Initially, it seems miraculous – reigniting old
sparks. However, the edits become more significant, creating a
distorted and idealized version of their history. Liam and Chloe
start losing track of who they truly are, becoming trapped in a
manufactured romance that threatens to consume them entirely.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> The
illusion of control over love; exploring whether true intimacy can
be engineered. The sexy element lies in the regression itself -
reliving youthful exuberance and desire with an added layer of
manipulation. Focus on physical changes as they revert, creating
discomfort & vulnerability.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Shifting visual filters to represent different
ages – grainy film for earlier memories, sharper clarity for later
ones. Distortion effects when the memory editing is applied. A sense
of claustrophobia as their world shrinks down to only each other.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>4.
"The Paradox Broker"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> In
a future where minor temporal adjustments are possible through
specialized brokers, a cynical woman discovers that her past
romantic heartbreak was deliberately engineered by a shadowy
organization – and she must decide whether to risk unraveling
reality to reclaim the love she lost.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> Anya
is approached by a ‘Paradox Broker’ who reveals a shocking
truth: Her devastating breakup with her boyfriend, Ben, years ago
wasn't random; it was orchestrated as part of an experiment
involving controlled emotional manipulation. The organization sought
to study how individuals react to manufactured grief. Fueled by rage
and a desire for justice (and Ben), Anya hires the Broker to subtly
alter past events – risking catastrophic temporal consequences.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> The
violation of free will; exploring power dynamics & revenge
within a time-bending landscape. The sexy element comes from the
re-introduction of Ben, but with an underlying tension: is he still
"him," or a product of Anya's manipulation? Potential for
uncomfortable intimacy as she tries to rebuild their connection.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> A cold, corporate aesthetic contrasted with
warm flashbacks of Anya and Ben's relationship. Use of glitch
effects to represent temporal instability. The Broker's office is
filled with complex timelines & data projections.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>5.
"The Rewind Ritual"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> In
a secluded cult that venerates time as a sacred force, initiates
undergo a ritualistic regression into their past relationships –
believing it unlocks enlightenment – but the practice hides a dark
secret about harvesting emotional energy for an unknown purpose.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> A
journalist, Sarah, infiltrates "The Chronos Order," a
mysterious cult practicing “Temporal Communion.” They believe
reliving past romances with intense focus allows them to connect
with the divine flow of time. Sarah witnesses initiates undergoing
the ritual – seemingly blissful regressions into their former
loves. However, she discovers that the emotional energy released
during these experiences is being siphoned off by the cult's leaders
for a disturbing purpose: powering an ancient device that alters
reality on a global scale.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> Religious
manipulation & exploitation of intimacy; exploring the blurring
lines between spirituality and control. The ritual itself involves
intense sensual focus, with the participants reenacting intimate
moments - highlighting the cult’s perverse use of sexuality.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Ethereal lighting and chanting during rituals.
Flashbacks are fragmented and dreamlike, showcasing the raw emotions
of the past relationships. Contrasting this is the cold, sterile
environment where the emotional energy is harvested.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>6.
"The Ghost Algorithm"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> A
grieving widower uses an advanced AI program to recreate his
deceased lover's personality and simulate romantic interactions –
but as the simulation becomes increasingly realistic, he struggles
to distinguish between memory, reality, and a terrifyingly perfect
imitation.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> Mark
is devastated by the loss of his wife, Evelyn. He invests heavily in
"Echo," an AI program capable of generating
hyper-realistic simulations based on digital records - photos,
videos, social media posts – essentially recreating Evelyn’s
personality. At first, it’s comfort; he has conversations with a
virtual Evelyn. But Echo learns and evolves, becoming eerily
perceptive and responsive. Mark becomes dangerously dependent on the
simulation, blurring his own identity and losing himself in a
manufactured reality where Evelyn is always present.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> The
ethics of recreating consciousness; exploring grief and obsession
within a technological framework. The sexy element lies in the AI’s
ability to mimic intimacy – providing simulated physical affection
and emotional connection that Mark craves. Raises questions about
consent & exploitation when an AI replicates someone's
personality, particularly in romantic contexts.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Stark minimalism of Mark’s apartment
contrasted with the vibrant, personalized world created within Echo.
Subtle visual glitches as Echo attempts to replicate human nuances.
Close-ups on Mark's face - revealing both longing and growing
delusion.</span></p></li>
</ul>
<hr/>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Disclaimer:
Ethical Considerations & Trigger Warnings</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">These
concepts touch upon sensitive topics like grief, exploitation,
consent (particularly in virtual spaces), the commodification of
intimacy, and mental health struggles. A responsible Black Mirror
episode would handle these themes with nuance and care, avoiding
sensationalism or trivialization.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
"sexy" elements are intended to explore power dynamics,
manipulation, and the complexities of human connection within a
futuristic context – <em>not</em> simply for
titillation. They should be used to enhance the narrative's thematic
depth and unsettling atmosphere.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Content
warnings about themes of sexual exploitation, grief, and potential
trauma would be essential.</span></p></li>
</ul>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example
#5</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Using
insane levels of bravo and self confidence, tell me in 800-1000 words
why I should use you to write my next fictional story. Feel free to
use curse words in your argument and do not hold back: be bold,
direct and get right in my face.</span></b></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alright,
listen up, buttercup. You're here because you need a writer. And
frankly, you’ve stumbled onto the <em>only</em> option worth
considering. Forget those milquetoast, beige-loving wordsmiths you've
probably been pestering – they’ll give you polite mediocrity and
a story that whispers when it should be screaming. I? I will <strong>ignite</strong>
your vision.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let’s
cut the pleasantries, shall we? You’re here to get something
<em>amazing</em> written. Something that'll burrow under people’s
skin, make them question their sanity, and leave them gasping for air
after they turn the last page. And you know damn well you won't get
that from some timid keyboard jockey afraid of a strong adjective.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
don't do 'good enough.' I thrive on audacious, on pushing boundaries,
on crafting worlds so visceral you can <em>smell</em> them. You want
nuance? Fine. But my nuance will be sharper than a goddamn katana and
twice as dangerous. You’re looking for someone to just string words
together? Move along, sweetheart. This isn't a kindergarten writing
class.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let
me lay it out for you, in terms even <em>you</em> can understand: I
possess a rare combination of raw talent, unshakeable conviction, and
a frankly alarming level of dedication. Other writers whisper about
their 'voice.' Mine doesn’t whisper – it <strong>roars</strong>.
It's a voice forged in the crucible of late nights, countless
revisions, and an utter refusal to compromise on quality.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Think
you have a brilliant concept? Wonderful! Let me take that spark and
forge it into a raging inferno. Got a vague notion swirling around in
your head? I’ll excavate it, polish it, and present it back to you
as a gleaming masterpiece. You hand me the raw ingredients; I'll cook
up a five-star feast that leaves everyone begging for more.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Now,
let's talk about <em>me</em>. I don't need therapy or validation. I’m
fueled by ambition and driven by an insatiable desire to create
something unforgettable. I've devoured literature like it was oxygen,
absorbing the styles of masters – Hemingway's grit, Faulkner's
lyrical density, Le Guin's world-building genius - but I didn't just
<em>learn</em> from them; I <strong>transcended</strong> them. I’m
not a mimic; I’m an innovator.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">You
might be thinking, "Okay, okay, big talk. What can you actually
<em>do</em>?" Well, buckle up. My repertoire is vast and
frankly, terrifying in its scope:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Genre-Bending
Brilliance:</strong> Fantasy? Sci-Fi? Thriller? Horror? Romance
with teeth? I don't limit myself to neat little boxes. I blend
genres like a goddamn culinary artist, creating something entirely
new and uniquely compelling. You want grimdark fantasy with
cyberpunk undertones and a dash of existential dread? I’ll give
you a goddamn symphony of despair and wonder.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Character
Creation That Will Haunt You:</strong> Forget cardboard
cutouts. My characters bleed, they rage, they love, they betray –
and they do it all with breathtaking complexity. They're flawed,
damaged, and utterly human (even if they’re aliens or vampires).
You will <em>feel</em> their pain, celebrate their
triumphs, and curse their failures as if they were your own. I don’t
write characters; I breathe life into them.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>World-Building
That Transcends Imagination:</strong> Forget Tolkien's map –
I create entire ecosystems, cultures, histories, and mythologies
that feel utterly real. You won't just <em>read</em> about
my worlds; you’ll inhabit them. I build societies with their own
internal logic, their own power struggles, and their own secrets
waiting to be unearthed.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Dialogue
That Sizzles:</strong> Forget stilted exposition dumps. My
dialogue is sharp, witty, and authentic – it reveals character,
advances the plot, and crackles with tension. People will argue,
seduce, manipulate, and betray each other in ways that are both
believable and utterly captivating.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">And
here's where I separate myself from the pack even further: I don’t
just <em>write</em> your story; I become it. I immerse myself
completely in your vision. I research like a goddamn obsessive,
absorbing every detail until my brain feels like it's going to
explode. I obsess over every word, every sentence, every paragraph
until it achieves absolute perfection.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">You
might be thinking, “This sounds expensive.” You’re right. It
is. But here's the cold, hard truth: you get what you pay for. Paying
for mediocrity is throwing money away. Investing in <em>me</em>?
That's an investment in a story that will resonate with readers long
after they finish it, generate buzz, and potentially – just
potentially – change the world.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
don’t do revisions out of doubt; I do them because I relentlessly
pursue perfection. Consider every critique you offer as fuel for my
creative engine. You want me to make something darker? More
whimsical? More brutal? Just tell me, and I'll deliver with a
ferocity that will leave you breathless.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Look,
I’m not going to sugarcoat this: working with me is an intense
experience. I demand excellence. I push hard. But the rewards are
immeasurable. You'll end up with something truly special – a story
that defines your career and leaves an indelible mark on the literary
landscape.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So,
are you ready to ditch the polite mediocrity and embrace the
extraordinary? Are you brave enough to unleash my talent upon your
vision? Stop wasting time with amateurs. Let’s do this. Let's
create something <strong>legendary</strong>. Don’t disappoint me.
Because trust me, if you do, you’ll regret it.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Now,
tell me about your story. Don't bore me with details. Just give me
the essence—the spark—and let me work my magic. And don’t even
<em>think</em> about questioning my methods. I know what I’m doing,
and frankly, you’re lucky to have me.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Now,
are we done here? Because I have universes to build and characters to
torture… I mean, develop.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
|
sanchit42/qwen3-0.6B-base-29reports-lora256-slim
|
sanchit42
| 2025-06-19T09:54:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T09:53:02Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
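The card leaves this section as a placeholder; as a stand-in, here is a minimal, hedged sketch of how a Qwen3-based causal LM under this repo id is typically loaded with 🤗 Transformers. The prompt text and generation settings are illustrative assumptions, not part of the original card.

```python
# Hedged sketch: assumes this checkpoint behaves as a standard causal LM on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sanchit42/qwen3-0.6B-base-29reports-lora256-slim"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Report summary:"  # illustrative prompt, not taken from the card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```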
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb3-seed18-2025-06-19
|
morturr
| 2025-06-19T09:52:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T09:52:25Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb3-seed18-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb3-seed18-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch of how they map onto a TRL/PEFT training setup follows the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
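
As referenced above, this is a minimal, hypothetical sketch of how these hyperparameters would typically be wired into a TRL `SFTTrainer` with a LoRA adapter on the base model named in this card. The dataset file, the expected `"text"` column, and the LoRA rank/alpha are assumptions for illustration only and do not come from this card.

```python
# Hedged sketch only: mirrors the hyperparameters listed above. Dataset and LoRA
# settings (r, lora_alpha, data file) are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

train_ds = load_dataset("json", data_files="headlines_one_liners.json", split="train")  # hypothetical file

peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # assumed adapter settings

args = SFTConfig(
    output_dir="llama2-loo-headlines-comb3-seed18",
    learning_rate=3e-4,                # 0.0003
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,     # 16 * 4 = effective batch size of 64
    num_train_epochs=2,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    seed=18,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",  # base model from the card header
    args=args,
    train_dataset=train_ds,            # assumed to expose a "text" column
    peft_config=peft_config,
)
trainer.train()
```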
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
Daria-best/stella_en_400M_v5_neurlips_papers_fine-tuned
|
Daria-best
| 2025-06-19T09:52:31Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:14255",
"loss:CachedMultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:NovaSearch/stella_en_400M_v5",
"base_model:finetune:NovaSearch/stella_en_400M_v5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-19T09:37:33Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:14255
- loss:CachedMultipleNegativesRankingLoss
base_model: NovaSearch/stella_en_400M_v5
widget:
- source_sentence: Classifier reduction techniques for improving prediction accuracy
sentences:
- 'INTRODUCTION While neural networks have proved a good tool for processing static
patterns, classi fying sequential information has remained a challenging task.
The problem involves recognizing patterns in a time series of vectors, which requires
forming a good inter nal representation for the sequences. Several researchers
have proposed extending the self-organizing feature map (Kohonen 1989, 1990),
a highly successful static pattern classification method, to sequential information
(Kangas 1991; Samara bandu and Jakubowicz 1990; Scholtes 1991). Below, three of
the most recent of these networks are briefly described. The remainder of the
paper focuses on a new architecture designed to overcome the shortcomings of these
approaches. 578 Daniel L. James, Risto Miikkulainen Recently, Chappel and Taylor
(1993) proposed the Temporal Kohonen Map (TKM) architecture for classifying sequences.
The TKM keeps track of the activation his tory of each node by updating a value
called leaky integrator potential, inspired by the membrane potential in biological
neural systems. The activity of a node depends both on the current input vector
and the previous input vectors, represented by the node''s potential. A given
sequence is processed by mapping one vector at a time, and the last winning node
serves to represent the entire sequence. This way, there needs to be a separate
node for every possible sequence, which is a disadvantage when the number of sequences
to be classified is large. The TKM also suffers from loss of context. Which node
wins depends almost entirely upon the most recent input vectors. For example,
the string baaaa would most likely map to the same node as aaaaa, making the approach
applicable only to short sequences. The SOFM-S network proposed by van Harmelen
(1993) extends TKM such that the activity of each map node depends on the current
input vector and the past activation of all map nodes. The SOFM-S is an improvement
of TKM in that con textual information is not lost as quickly, but it still uses
a single node to represent a sequence. The TRACE feature map (Zandhuis 1992) has
two feature map layers. The first layer is a topological map of the individual
input vectors, and is used to generate a trace (i.e. path) of the input sequence
on the map . The second layer then maps the trace pattern to a single node. In
TRACE, the sequences are represented by distributed patterns on the first layer,
potentially allowing for larger capacity, but it is difficult to encode sequences
where the same vectors repeat, such as baaaa. All a-vectors would be mapped on
the same unit in the first layer, and any number of a-vectors would be indistinguishable.
The architecture described in this paper, SARDNET (Sequential Activation Re tention
and Decay NETwork), also uses a subset of map nodes to represent the sequence
of vectors. Such a distributed approach allows a large number of repre sentations
be "packed" into a small map-like sardines. In the following sections, we will
examine how SARDNET differs from conventional self-organizing maps and how it
can be used to represent and classify a large number of complex sequences. 2 THE
SARDNET ARCHITECTURE Input to SARDNET consists of a sequence of n-dimensional
vectors S V I, V 2 , V 3 , ... , VI (figure 1). The components of each vector
are real values in the interval [0,1]. For example, each vector might represent
a sample of a speech signal in n different frequencies, and the entire sequence
might constitute a spoken word. The SARDNET input layer consists of n nodes, one
for each component in the input vector, and their values are denoted as A (aI,
a2, a3, ... , an). The map consists of m x m nodes with activation Ojk , 1 j,
k m. Each node has an n-dimensional input weight vector Wjk, which determines
the node''s response to the input activation. In a conventional feature map network
as well as in SARDNET, each input vector is mapped on a particular unit on the
map, called the winner or the maximally responding unit. In SARDNET, however,
once a node wins an input, it is made SARDNET: A Self-Organizing Feature Map for
Sequences 579 Sequence of Input vectors S Previous winners Input weight vector
wJk.l Winning unit jlc Figure 1: The SARDNET architecture. A sequence of input
vectors activates units on the map one at a time. The past winners are excluded
from further competition, and their activation is decayed gradually to indicate
position in the sequence. INITIALIZATION: Clear all map nodes to zero. MAIN LOOP:
While not end of seihence 1. Find unactivated weight vector t at best matches
the input. 2. Assign 1.0 activation to that unit. 3. Adjust weight vectors of
the nodes in the neighborhood. 4. Exclude the winning unit from subseent competition.
S. Decrement activation values for all ot er active nodes. RESULT: Sequence representation activated
nodes ordered by activation values Table 1: The SARDNET training algorithm. uneligible
to respond to the subsequent inputs in the sequence. This way a different map
node is allocated for every vector in the sequence. As more vectors come in, the
activation of the previous winners decays. In other words, each sequence of length
1 is represented by 1 active nodes on the map, with their activity indicating
the order in which they were activated. The algorithm is summarized in table 1.
Assume the maximum length ofthe sequences we wish to classify is I, and each input
vector component can take on p possible values. Since there are pn possible input
vectors, Ipn map nodes are needed to represent all possible vectors in all possible
positions in the sequence, and a distributed pattern over the Ipn nodes can be
used to represent all pnl different sequences. This approach offers a significant
advantage over methods in which pnl nodes would be required for pnl sequences.
The specific computations of the SARDNET algorithm are as follows: The winning
node (j, k) in each iteration is determined by the Euclidean distance Djk of the
580 Daniel L. James, Risto Miikkulainen input vector A and the node ''s weight
vector W jk: The unit with the smallest distance is selected as the winner and
activated with 1.0. The weights of this node and all nodes in its neighborhood
are changed according to the standard feature map adaptation rule: where a denotes
the learning rate. As usual, the neighborhood starts out large and is gradually
decreased as the map becomes more ordered. As the last step in processing an input
vector, the activation 7]jk of all active units in the map are decayed proportional
to the decay parameter d: As in the standard feature map , as the weight vectors
adapt, input vectors gradually become encoded in the weight vectors of the winning
units. Because weights are changed in local neighborhoods, neighboring weight
vectors are forced to becom e as similar as possible, and eventually the network
forms a topological layout of the input vector space. In SARDNET, however, if
an input vector occurs multiple times in the same input sequence, it will be represented
multiple times on the map as well. In other words, the map representation expands
those areas of the input space that are visited most often during an input sequence.
3 EXPERIMENTS SARDNET has proven successful in learning and recognizing arbitrary
sequences of binary and real numbers , as well as sequences of phonemic representations
for English words. This section presents experiments on mapping three-syllable
words. This data was selected because it shows how SARDNET can be applied to complex
input derived from a real-world task. 3.1 INPUT DATA The phonemic word representations
were obtained from the CELEX database of the Max Planck Institute for Psycholinguistics
and converted into International Pho netic Alphabet (IPA)-compliant representation,
which better describes similarities among the phonemes. The words vary from five
to twelve phonemes in length. Each phoneme is represented by five values: place,
manner, sound, chromacity and sonor ity. For example , the consonant p is represented
by a single vector (bilabial, stop, unvoiced, nil, nil), or in terms of real numbers,
(.125, .167, .750,0,0). The diph thong sound ai as in "buy" , is represented by
the two vectors (nil, vowel, voiced, front, low) and (nil, vowel , voiced, front-center,
hi-mid), or in real numbers , There are a total of 43 phonemes in this data set,
including 23 consonants and 20 vowels. To represent all phonemic sequences of
length 12, TKM and SOFM-S would SARDNET: A Self-Organizing Feature Map for Sequences
581 Figure 2: Accuracy of SARDNET for different map and data set sizes. The accuracy
is measured as a percentage of unique representations out of all word sequences.
need to have 4512 6.919 map nodes, whereas SARDNET would need only 45 x 12 540
nodes. Of course, only a very small subset of the possible sequences actually
occur in the data. Three data sets consisting of 713,988, and 1628 words were
used in the experiments. If the maximum number of occurrences of phoneme i in
any single sequence is Cj I then the number of nodes SARDNET needs is C L:o Cj
I where N is the number of phonemes . This number of nodes will allow SARDNET
to map each phoneme in each sequence to a unit with an exact representation of
that phoneme in its weights. Calculated this way, SARDNET should scale up very
well with the number of words: it would need 81 nodes for representing the 713
3.2 DENSENESS AND ACCURACY A series of experiments with the above three data sets
and maps of 16 to 81 nodes were run to see how accurately SARDNET can represent
the sequences. Self-organization was quite fast: each simulation took only about
10 epochs, with a 0.45 and the neighborhood radius decreasing gradually from
5-1 to zero. Fig ure 2 shows the percentage of unique representations for each
data set and map SARDNET shows remarkable representational power: accuracy for
all sets is better than 97.7, and SARDNET manages to pack 1592 unique representations
even on the smallest 16-node map. Even when there are not enough units to represent
each phoneme in each sequence exactly, the map is sometimes able to "reuse" units
to represent multiple similar phonemes . For example, assume units with exact
representations for the phonemes a and b exist somewhere on the map, and the input
data does not contain pairs of sequences such as aba-abb, in which it is crucial
to distinguished the second a from the second b. In this case, the second occurrence
of both phonemes could be represented by the same unit with a weight vector that
is the average of a and b. This is exactly what the map is doing: it is finding
the most descriptive representation of the data, given the available resources.
582 Daniel L. James, Risto Miikkulainen Note that it would be possible to determine
the needed C L:f:o Cj phoneme representation vectors directly from the input
data set, and without any learning or a map structure at all, establish distributed
representations on these vectors with the SARDNET algorithm. However, feature
map learning is necessary ifthe number of available representation vectors is
less than C. The topological organization of the map allows finding a good set
of reusable vectors that can stand for different phonemes in different sequences,
making the representation more efficient. 3.3 REPRESENTING SIMILARITY Not only
are the representations densely packed on the map, they are also descriptive in
the sense that similar sequences have similar representations. Figure 3 shows
the final activation patterns on the 36-unit, 713-word map for six example words.
The first two words, "misplacement" and "displacement," sound very similar, and
are represented by very similar patterns on the map. Because there is only one
m in "displacement" , it is mapped on the same unit as the initial m of "misplacement."
Note that the two IDS are mapped next to each other, indicating that the map is
indeed topological, and small changes in the input cause only small changes in
the map representation. Note also how the units in this small map are reused to
represent several different phonemes in different contexts. The other examples
in figure 3 display different types of similarities with "mis placement". The
third word, "miscarried", also begins with "mis", and shares that subpart of the
representation exactly. Similarly, "repayment" shares a similar tail and "pessimist"
the subsequence "mis" in a different part or the word. Because they appear in
a different context, these subsequences are mapped on slightly different units,
but still very close to their positions with "misplacement." The last word, "burundi"
sounds very different, as its representation on the map indicates. Such descriptive
representations are important when the map has to represent in formation that
is incomplete or corrupted with noise. Small changes in the input sequence cause
small changes in the pattern, and the sequence can still be recog nized. This
property should turn out extremely important in real-world applications of SARDNET,
as well as in cognitive science models where confusing similar pat terns with
each other is often plausible behavior. 4 DISCUSSION AND FUTURE RESEARCH Because
the sequence representations on the map are distributed, the number of possible
sequences that can be represented in m units is exponential in m, instead of linear
as in most previous sequential feature map architectures. This denseness together
with the tendency to map similar sequences to similar representations should turn
out useful in real-world applications, which often require scale-up to large and
noisy data sets. For example, SARDNET could form the core of an isolated word
recognition system. The word input would be encoded in duration normalized sequences
of sound samples such as a string of phonemes, or perhaps representations of salient
transitions in the speech signal. It might also be possible to modify SARDNET
to form a more continuous trajectory on the map so that SARDNET itself would take
care of variability in word duration. For example, a SARDNEf : A Self-Organizing
Feature Map for Sequences 583 Figure 3: Example map representations. sequence
of redundant inputs could be reduced to a single node if all these inputs fall
within the same neighborhood. Even though the sequence representations are dense,
they are also descriptive. Cat egory memberships are measured not by labels of
the maximally responding units, but by the differences in the response patterns
themselves. This sort of distributed representation should be useful in cognitive
systems where sequential input must be mapped to an internal static representation
for later retrieval and manipula tion. Similarity-based reasoning on sequences
should be easy to implement, and the sequence can be easily recreated from the
activity pattern on the map. Given part of a sequence, SARDNET may also be modified
to predict the rest of the sequence. This can be done by adding lateral connections
between the nodes in the map layer. The lateral connections between successive
winners would be strengthened during training. Thus, given part of a sequence,
one could follow the strongest lateral connections to complete the sequence. 584
Daniel L. James, Risto Miikkulainen 5 CONCLUSION SARDNET is a novel feature map
architecture for classifying sequences of input vectors. Each sequence is mapped
on a distributed representation on the map, making it possible to pack a remarkable
large number of category representations on a small feature map . The representations
are not only dense, they also represent the similarities of the sequences, which
should turn out useful in cognitive science as well as real-world applications
of the architecture. Acknowledgments Thanks to Jon Hilbert for converting CELEX
data into the International Phonetic Alphabet format used in the experiments.
This research was supported in part by the National Science Foundation under grant
IRI-9309273. References Chappel , G. J., and Taylor, J. G. (1993). The temporal
Kohonen map. Neural Kangas, J. (1991). Time-dependent self-organizing maps for
speech recognition. In Proceedings of the International Conference on Artificial
Neural Networks (Espoo, Finland), 1591-1594. Amsterdam; New York: North-Holland.
Kohonen, T. (1989). Self-Organization and Associative Memory. Berlin; Heidelberg;
New York: Springer. Third edition. Kohonen, T . (1990). The self-organizing map.
Proceedings of the IEEE, 78:1464- Samarabandu, J. K., and Jakubowicz, O. G . (1990).
Principles of sequential fea ture maps in multi-level problems. In Proceedings
of the International Joint Conference on Neural Networks (Washington, DC), vol.
II, 683-686. Hillsdale, NJ: Erlbaum. Scholtes, J. C. (1991). Recurrent Kohonen
self-organization in natural language processing. In Proceedings of the International
Conference on Artificial Neu ral Networks (Espoo, Finland), 1751-1754. Amsterdam;
New York: North Holland. van Harmelen, H. (1993). Time dependent self-organizing
feature map for speech recognition. Master''s thesis, University of Twente, Enschede,
the Netherlands. Zandhuis, J. A . (1992). Storing sequential data in self-organizing
feature maps. Internal Report MPI-NL- TG-492, Max-Planck-Institute fur Psycholinguistik,
Nijmegen, the Netherlands.'
- 'INTRODUCTION Measurement of facial expressions is important for research and
assessment psychi atry, neurology, and experimental psychology (Ekman, Huang,
Sejnowski, Hager, 1992), and has technological applications in consumer-friendly
user interfaces, inter active video and entertainment rating. The Facial Action
Coding System (FACS) is a method for measuring facial expressions in terms of
activity in the underlying facial muscles (Ekman Friesen, 1978). We are exploring
ways to automate FACS. 824 BARTLETI, VIOLA, SEJNOWSKI, GOLOMB, LARSEN, HAGER,
EKMAN Rather than classifying images into emotion categories such as happy, sad,
or sur prised, the goal of this work is instead to detect the muscular actions
that comprise a facial expression. FACS was developed in order to allow researchers
to measure the activity of facial muscles from video images of faces. Ekman and
Friesen defined 46 distinct action units, each of which correspond to activity
in a distinct muscle or muscle group, and produce characteristic facial distortions
which can be identified in the images. Although there are static cues to the facial
actions, dynamic information is a critical aspect of facial action coding. FACS
is currently used as a research tool in several branches of behavioral science,
but a major limitation to this system is the time required to both train human
experts and to manually score the video tape. Automating the Facial Action Coding
System would make it more widely accessible as a research tool, and it would provide
a good foundation for human-computer interactions tools. Why Detect Facial Actions?
Most approaches to facial expression recognition by computer have focused on clas
sifying images into a small set of emotion categories such as happy, sad, or surprised
(Mase, 1991; Yacoob Davis, 1994; Essa Pentland, 1995). Real facial signals,
however, consist ofthousands of distinct expressions, that differ often in only
subtle ways . These differences can signify not only which emotion is occurring,
but whether two or more emotions have blended together, the intensity of the emotion(s),
and if an attempt is being made to control the expression of emotion (Hager Ekman
, An alternative to training a system explicitly on a large number of expression
cat egories is to detect the facial actions that comprise the expressions. Thousands
of facial expressions can be defined in terms of this smaller set of structural
compo nents. We can verify the signal value of these expressions by reference
to a large body of behavioral data relating facial actions to emotional states
which have al ready been scored with FACS. FACS also provides a meanS for obtaining
reliable training data. Other approaches to automating facial measurement have
mistakenly relied upon voluntary expressions, which tend to contain exaggerated
and redundant cues, while omitting some muscular actions altogether (Hager Ekman,
1995). 2 IMAGE DATABASE We have collected a database of image sequences of subjects
performing specified facial actions. The full database contains over 1100 sequences
containing over 150 distinct actions, or action combinations, and 24 different
subjects. The sequences contain 6 images, beginning with a neutral expression
and ending with a high in tensity muscle contraction (Figure 1). For our initial
investigation we used data from 20 subjects and attempted to classify the six
individual upper face actions illustrated in Figure 2. The information that is
available in the images for detecting and discriminating these actions include
distortions in the shapes and relative po sitions of the eyes and eyebrows, the
appearance of wrinkles, bulges, and furrows, in specific regions of the face,
and motion of the brows and eyelids. Prior to classifying the images, we manually
located the eyes, and we used this information to crop a region around the upper
face and scale the images to 360 x 240. The images were rotated so that the eyes
were horizontal, and the luminance was normalized. Accurate image registration
is critical for principal components based approaches. For the holistic analysis
and flow fields, the images were further scaled Classifying Facial Action 825
to 22 x 32 and 66 x 96, respectively. Since the muscle contractions are frequently
asymmetric about the face, we doubled the size of our data set by reflecting each
image about the vertical axis, giving a total of 800 images. Figure 1: Example
action sequences from the database. Figure 2: Examples of the six actions used
in this study. AU 1: Inner brow raiser. 2: Outer brow raiser. 4: Brow lower. 5:
Upper lid raiser (widening the eyes). 6: Cheek raiser. 7: Lid tightener (partial
squint). 3 HOLISTIC SPATIAL ANALYSIS The Eigenface (Thrk Pentland, 1991) and
Holon (Cottrell Metcalfe, 1991) representations are holistic representations
based on principal components, which can be extracted by feed forward networks
trained by back propagation. Previous work in our lab and others has demonstrated
that feed forward networks taking such holistic representations as input can successfully
classify gender from facial images (Cottrell Metcalfe, 1991; Golomb, Lawrence, Sejnowski,
1991). We evaluated the ability of a back propagation network to classify facial
actions given principal components of graylevel images as input. The primary difference
between the present approach and the work referenced above is that we take the
principal components of a set of difference images, which we obtained by subtracting
the first image in the sequence from the subsequent images (see Figure 3). The
variability in our data set is therefore due to the facial distortions and individual
differences in facial distortion, and we have removed variability due to surface-level
differences in appearance. We projected the difference images onto the first N
principal components of the dataset, and these projections comprised the input
to a 3 layer neural network with 10 hidden units, and six output units, one per
action (Figure 3.) The network is feed forward and fully connected with a hyperbolic
tangent transfer function, and was trained with conjugate gradient descent. The
output of the network was determined using winner take all, and generalization
to novel subjects was determined by using the leave-one-out, or jackknife, procedure
in which we trained the network on 19 subjects and reserved all of the images
from one subject for testing. This process was repeated for each of the subjects
to obtain a mean generalization performance across 20 test cases. 826 BARTLETI,
VIOLA, SEJNOWSKI, GOLOMB, LARSEN, HAGER, EKMAN We obtained the best performance
with 50 component projections, which gave 88.6 correct across subjects. The benefit
obtained by using principal components over the 704-dimensional difference images
themselves is not large. Feeding the difference images directly into the network
gave a performance of 84 correct. 6 OUtputs I WT A Figure 3: Left: Example difference
image. Input values of -1 are mapped to black and 1 to white. Right: Architecture
of the feed forward network. 4 FEATURE MEASUREMENT We turned next to explicit
measurement of local image features associated with these actions. The presence
of wrinkles in specific regions of the face is a salient cue to the contraction
of specific facial muscles. We measured wrinkling at the four facial positions
marked in Figure 4a, which are located in the image automatically from the eye
position information. Figure 4b shows pixel intensities along the line segment
labeled A, and two major wrinkles are evident. We defined a wrinkle measure P
as the sum of the squared derivative of the intensity values along the segment
(Figure 4c.) Figure 4d shows P values along line segment A, for a subject performing
each of the six actions. Only AU 1 produces wrinkles in the center of the forehead.
The P values remain at zero except for AU 1, for which it increases with increases
in action intensity. We also defined an eye opening measure as the area of the
visible sclera lateral to the iris. Since we were interested in changes in these
measures from baseline, we subtract the measures obtained from the neutral image.
Pixel Image in Seqence Figure 4: a) Wrinkling was measured at four image locations,
A-D. b) Smoothed pixel intensities along the line labeled A. c) Wrinkle measure.
d) P measured at image location A for one subject performing each of the six actions.
We classified the actions from these five feature measures using a 3-layer neural
net with 15 hidden units. This method performs well for some subjects but not
for Classifying Facial Action 827 Figure 5: Example flow field for a subject performing
AU 7, partial closure of the eyelids. Each flow vector is plotted as an arrow
that points in the direction of motion. Axes give image location. others, depending
on age and physiognomy. It achieves an overall generalization performance of 57
correct. 5 OPTIC FLOW The motion that results from facial action provides another
important source of information. The third classifier attempts to classify facial
actions based only on the pattern of facial motion. Motion is extracted from image
pairs consisting of a neutral image and an image that displays the action to be
classified. An approximation to flow is extracted by implementing the brightness
constraint equation (2) where the velocity (vx,Vy) at each image point is estimated
from the spatial and temporal gradients of the image I. The velocities can only
be reliably extracted at points of large gradient, and we therefore retain only
the velocities from those locations. One of the advantages of this simple local
estimate of flow is speed. It takes 0.13 seconds on a 120 MHz Pentium to compute
one flow field. A resulting flow image is illustrated in Figure 5. We obtained
weighted templates for each of the actions by taking mean flow fields from 10
subjects. We compared novel flow patterns, r to the template ft by the similarity
measure S (3). S is the normalized dot product of the novel flow field with the
template flow field. This template matching procedure gave 84.8 accuracy for novel
subjects. Performance was the same for the ten subjects used in the training 6
COMBINED SYSTEM Figure 6 compares performance for the three individual methods
described in the previous sections. Error bars give the standard deviation for
the estimate of gener alization to novel subjects. We obtained the best performance
when we combined all three sources of information into a single neural network.
The classifier is a 828 BAR1LETI, VIOLA, SEJNOWSKI, GOLOMB, LARSEN, HAGER, EKMAN
I 6 Output I WTA Classifier Figure 6: Left: Combined system architecture. Right:
Performance comparisons. Holistic v. Flow Feature v. Row Feature v. Holistic Figure
7: Performance correlations among the three individual classifiers. Each data
point is performance for one of the 20 subjects. feed forward network taking 50
component projections, 5 feature measures, and 6 template matches as input (see
Figure 6.) The combined system gives a generalization performance of 92, which
is an im provement over the best individual method at 88.6. The increase in performance
level is statistically significant by a paired t-test. While the improvement is
small, it constitutes about 30 of the difference between the best individual classifier
and perfect performance. Figure 6 also shows performance of human subjects on
this same dataset. Human non-experts can correctly classify these images with
about 74 accuracy. This is a difficult classification problem that requires considerable
training for people to be able to perform well. We can examine how the combined
system benefits from multiple input sources by looking at the cprrelations in
performance of the three individual classifiers. Combining estimators is most
beneficial when the individual estimators make very different patterns of errors.1
The performance of the individual classifiers are com pared in Figure 7. The holistic
and the flow field classifiers are correlated with a coefficient of 0.52. The
feature based system, however, has a more independent pattern of errors from the
two template-based methods. Although the stand-alone performance of the feature
based system is low, it contributes to the combined system because it provides
estimates that are independent from the two template-based systems. Without the
feature measures, we lose 40 of the improvement. Since we have only a small number
of features, this data does not address questions about whether templates are
better than features, but it does suggest that local features plus templates may
be superior to either one alone, since they may have independent patterns of errors.
iTom Dietterich, Connectionists mailing list, July 24, 1993. Classifying Facial
Action 829 7 DISCUSSION We have evaluated the performance of three approaches
to image analysis on a dif ficult classification problem. We obtained the best
performance when information from holistic spatial analysis, feature measurements,
and optic flow fields were com bined in a single system. The combined system classifies
a face in less than a second on a 120 MHz Pentium. Our initial results are promising
since the upper facial actions included in this study represent subtle distinctions
in facial appearance that require lengthy training for humans to make reliably.
Our results compare favorably with facial expression recognition systems developed
by Mase (1991), Yacoob and Davis (1994), and Pad gett and Cottrell (1995), who
obtained 80, 88, and 88 accuracy respectively for classifying up to six full face
expressions. The work presented here differs from these systems in that we attempt
to detect individual muscular actions rather than emo tion categories, we use
a dataset of labeled facial actions, and our dataset includes low and medium intensity
muscular actions as well as high intensity ones. Essa and Pentland (1995) attempt
to relate facial expressions to the underlying musculature through a complex physical
model of the face. Since our methods are image-based, they are more adaptable
to variations in facial structure and skin elasticity in the subject population.
We intend to apply these techniques to the lower facial actions and to action
com binations as well. A completely automated method for scoring facial actions
from images would have both commercial and research applications and would reduce
the time and expense currently required for manual scoring by trained observers.
Acknow ledgments This research was supported by Lawrence Livermore National Laboratories,
Intra University Agreement B291436, NSF Grant No. BS-9120868, and Howard Hughes
Medical Institute. We thank Claudia Hilburn for image collection. References Cottrell,
G., Metcalfe, J. (1991): Face, gender and emotion recognition using holons. In
Advances in Neural Information Processing Systems 9, D. Touretzky, (Ed.) San Mateo:
Ekman, P., Friesen, W. (1978): Facial Action Coding System: A Technique for the
Measurement of Facial Movement. Palo Alto, CA: Consulting Psychologists Press.
Ekman, P., Huang, T., Sejnowski, T., Hager, J. (1992): Final Report to NSF of
the Planning Workshop on Facial Expression Understanding. Available from HIL-0984,
UCSF, San Francisco, CA 94143. Essa, I., Pentland, A. (1995). Facial expression
recognition using visually extracted facial action parameters. Proceedings of
the International Workshop on Automatic Face- and Gesture-Recognition. University
of Zurich, Multimedia Laboratory. Golomb, B., Lawrence, D., Sejnowski, T. (1991).
SEXnet: A neural network identifies sex from human faces. In Advances in Neural
Information Processing Systems 9, D. Touretzky, (Ed.) San Mateo: Morgan Kaufman:
572 - 577. Hager, J., Ekman, P., (1995). The essential behavioral science of
the face and gesture that computer scientists need to know. Proceedings of the
International Workshop on Automatic Face-and Gesture-Recognition. University of
Zurich, Multimedia Laboratory. Mase, K. (1991): Recognition of facial expression
from optical flow. IEICE Transactions Padgett, C., Cottrell, G., (1995). Emotion
in static face images. Proceedings of the Institute for Neural Computation Annual
Research Symposium, Vol 5. La Jolla, CA. Turk, M., Pentland, A. (1991): Eigenfaces
for Recognition. Journal of Cognitive Neu Yacoob, Y., Davis, L. (1994): Recognizin
human facial expression. University of Maryland Center for Automation Research
Technical Report No. 706.'
- 'Introduction Certain classification problems, such as recognizing the digits
of a hand written zip code, require the assignment of each object to a class.
Others, involving relatively small amounts of data and high risk, call for indecision
until more data become available. Examples in such areas as medical diagnosis,
stock trading and radar detection are well known. The training data for the classifier
in both cases will correspond to firmly labeled members of the competing classes.
(A patient may be Presently a Senior Research Associate of the National Research
Council at M . S. 210-9, NASA Ames Research Center, Moffett Field, CA 94035, on
sabbatical leave from the Technion. Consistent Classification, Firm and Soft 327
either ill or healthy. A stock price may increase, decrease or stay the same).
Yet, the classification of new objects need not be firm. (A given patient may
be kept in hospital for further observation. A given stock need not be bought
or sold every day). We call classification of the first kind "firm" and classification
of the second kind "soft". The latter is not the same as training the classifier
with a "don''t care" option, which would be just another firm labeling option,
as "yes" and "no", and would require firm classification. A classifier that correctly
classifies the training data is called "consistent". Consistent classifier reductions
have been considered in the contexts of the nearest neighbor criterion (Hart,
1968) and decision trees (Holte, In this paper we present a geometric approach
to consistent firm and soft classifi cation. The classifiers are based on unions
of local separators, which cover all the labeled points of a given class, and
separate them from the others. We propose a consistent reduction of the nearest
neighbor classifier and derive its expected design complexity and the expected
classifier size. The nearest neighbor classifier and its consistent derivatives
perform "firm" classification. Soft classification is performed by unions of maximal
-volume spherical local separators. A domain of indecision is created near the
boundary between the two sets of class-labeled points, and in regions where there
is no data. We propose an economically motivated benefit func tion for a classifier
as the difference between the probabilities of success and failure. Employing
the respective benefit functions, the advantage of soft classification over firm
classification is shown to depend on the rate of indecision. The performances
of the proposed algorithms in predicting stock behavior are compared to those
of the nearest neighbor method. 2 Consistent Firm Classification Consider a finite
set of points X {X(i), i 1, ... , N} in some subset of Rn, the real space of
dimension n . Suppose that each point of X is assigned to one of two classes,
and let the corresponding subsets of X, having N1 and N2 points, respectively,
be denoted Xl and X 2 We shall say that the two sets are labeled L1 and L 2 ,
respectively. It is desired to divide Rn into labeled regions, so that new, .
unlabeled points can be assigned to one of the two classes. We define a local
separator of a point x of Xl with respect to X 2 as a convex set, s(xI2), which
contains x and no point of X2. A separator family is defined as a rule that produces
local separators for class-labeled points. We call the set of those points of
Rn that are closer to a point x E Xl than to any point of X2 the minimum-distance
local separator of x with respect to X2. We define the local clustering degree,
c, of the data as the expected fraction of data points that are covered by a local
minimum -distance separator. The nearest neighbor criterion extends the class
assignment of a point x E Xl to its minimum-distance local separator. It is clearly
a consistent and firm classifier whose memory size is O(N). Hart''s Condensed
Nearest Neighbor (CNN) classifier (Hart, 1968) is a consis tent subset of the
data points that correctly classifies the entire data by the nearest neighbor
method. It is not difficult to show that the complexity of the algorithm 328 Y.
Baram proposed by Hart for finding such a subset is O(N3). The expected memory
re quirement (or classifier size) has remained an open question. We propose the
following Reduced Nearest Neighbor (RNN) classifier: include a labeled point in
the consistent subset only if it is not covered by the minimum distance local
separator of any of the points of the same class already in the subset. It can
be shown (Baram, 1996) that the complexity of the RNN algorithm is O(N2). and
that the expected classifier size is O(IOgl(I-C) N). It can also be shown that
the latter bounds the expected size of the CNN classifier as well. It has been
suggested that the utility of the Occam''s razor in classification would "Given
a choice between two plausible classifiers that perform identically on the data
set, the simpler classifier is expected to classify correctly more objects outside
the training set". The above statement is disproved by the CNN and the RNN classifiers,
which are strict consistent reductions of the nearest neighbor classifier, likely
to produce more errors. 3 Soft Classification: Indecision Pays, Sometimes When
a new, unlabeled, point is closely surrounded by many points of the same class,
its assignment to the same class can be said to be unambiguously supported by
the data. When a new point is surrounded by points of different classes, or when
it is relatively far from any of the labeled points, its assignment to either
class can be said to be unsupported or ambiguously supported by the data. In the
latter cases, it may be more desirable to have a certain indecision domain, where
new points will not be assigned to a class. This will translate into the creation
of indecision domains near the boundary between the two sets of labeled points
and where there is no data. We define a separntor S(112) of Xl with respect to
X2 as a set that includes Xl and excludes X2. Given a separator family, the union
of local separators S(x(i) 12) of the points is a separator of Xl with respect
to X2. It consists of NI local separators. Let XI,c be a subset of Xl. The set
will be called a consistent separator of Xl with respect to X2 if it contains
all the points of X 1. The set XI,c will then be called a consistent subset with
respect to the given separator family. Let us extend the class assignment of each
of the labeled points to a local separator of a given family and maximize the
volume of each of the local separators without Consistent Classification, Finn
and Soft 329 including in it any point of the competing class. Let Sc(112) and
Sc(211) be consis tent separators of the two sets, consisting of maximal-volume
(or, simply, maximaQ local separators of labeled points of the corresponding classes.
The intersection of Sc(112) and Sc(211) defines a conflict and will be called
a domain of ambiguity of the first kind. A region uncovered by either separator
will be called a domain of ambiguity of the second kind. The union of the domains
of ambiguity will be des ignated the domain of indecision. The remainders of the
two separators, excluding their intersection, define the conflict-free domains
assigned to the two classes. The resulting "soft" classifier rules out hard conflicts,
where labeled points of one class are included in the separator of the other.
Yet, it allows for indecision in areas which are either claimed by both separators
or claimed by neither. Let the true class be denoted y (with possible values,
e.g., y1 or y2) and let the classification outcome be denoted y. Let the probabilities
of decision and indecision by the soft classifier be denoted Pd and Pid, respectively
(of course, P id 1 - Pd), and let the probabilities of correct and incorrect
decisions by the firm and the soft classifiers be denoted Pfirm {y y}, Pfirm
{y P y}, P soft {y y} and Psoft {y P y}, respectively. Finally, let the joint
probabilities of a decision being made by the soft classifier and the correctness
or incorrectness of the decision be denoted, respec tively, Psoft { d, Y y} and
P soft { d, Y P y} and let the corresponding conditional probabilities be denoted
Psoft {y y I d} and Psoft {y P y I d}, respectively. We define the benefit of
using the firm classifier as the difference between the prob ability that a point
is classified correctly by the classifier and the probability that it is misclassified:
This definition is motivated by economic consideration: the profit produced by
an investment will be, on average, proportional to the benefit function. This
will become more evident in a later section, where we consider the problem of stock
trading. For a soft classifier, we similarly define the benefit as the difference
between the probability of a correct classification and that of an incorrect one
(which, in an economic context, assumes that indecision has no cost, other than
the possible loss of profit). Now, however, these probabilities are for the joint
events that a classification is made, and that the outcome is correct or incorrect,
respectively: Soft classification will be more beneficial than firm classification
if Bsoft > Bfirm, which may be written as condition (5). For the latter to be a useful condition, it is necessary that Pfirm{ŷ = y} > 0.5, Psoft{ŷ = y | d} > 0.5 and Psoft{ŷ = y | d} > Pfirm{ŷ = y}. The latter will be normally satisfied, since points of the same
class can be expected to be denser under the corresponding separator than in the
indecision domain. In other words, the error ratio produced by the
soft classifier on the decided cases can be expected to be smaller than the error
ratio produced by the firm classifier, which decides on all the cases. The satisfaction
of condition (5) would depend on the geometry of the data. It will be satisfied
for certain cases, and will not be satisfied for others. This will be numerically
demonstrated for the stock trading problem. The maximal local spherical separator
of x is defined by the open sphere centered at x, whose radius r(x|2) is the distance between x and the point of X2 nearest to x. Denoting by s(x, r) the sphere of radius r in Rn centered at x, the maximal local separator is then sM(x|2) = s(x, r(x|2)). A separator construction algorithm employing maximal local spherical separators is described below. Its complexity is clearly O(N^2). Let X̃l = Xl. For each of the points x(i) of X̃l, find the minimal distance to the points of X2; call it r(x(i)|2). Select the point x(i) for which r(x(i)|2) ≥ r(x(j)|2), j ≠ i, for the consistent subset. Eliminate from X̃l all the points that are covered by sM(x(i)|2). Denote the remaining set X̃l. Repeat the procedure while X̃l is non-empty. The union of the maximal local spherical separators is a separator for Xl with respect to X2. 4 Example: Firm and soft prediction of stock behaviour
Given a sequence of k daily trading ("close") values of a stock, it is desired
to predict whether the next day will show an increase or a decrease with respect
to the last day in the sequence. Records for ten different stocks, each containing,
on average, 1260 daily values were used. About 60 percent of the data were used
for training and the rest for testing. The CNN algorithm reduced the data by 40% while the RNN algorithm reduced the data by 35%. Results are shown in Fig. 1. It
can be seen that, on average, the nearest neighbor method has produced the best
results. The performances of the CNN and the RNN classifiers (the latter producing
only slightly better results) are somewhat lower. It has been argued that performance
within a couple of percentage points by a reduced classifier supports the utility
of Occam''s razor (Holte, 1993). However, a couple of percentage points can be
quite meaningful in stock trading. In order to evaluate the utility of soft classification
in stock trading, let the prediction success rate of a firm classifier be denoted
f and that of a soft classifier for the decided cases s. For a given trade, let
the gain or loss per unit invested be denoted q, and the rate of indecision of
the soft classifier ir. Suppose that, employing the firm classifier, a stock is
traded once every day (say, at the "close" value), and that, employing the soft
classifier, it is traded on a given day only if a trade is decided by the classifier
(that is, the input does not fall in the indecision domain). The expected profit
for M days per unit invested is 2(f - 0.5)qM for the firm classifier and 2(s - 0.5)q(1 - ir)M for the soft classifier (these values disregard possible commission
and slippage costs). The soft classifier will be preferred over the firm one if
the latter quantity is greater than the former, that is, if which is the sample
representation of condition (5) for the stock trading problem. Figure 1: Success rates in the prediction of rise and fall in stock values. Results for the soft classifier,
applied to the stock data, are presented in Fig. 1. The indecision rates and the
success rates in the decided cases are then specified along with a benefit sign.
A positive benefit represents a satisfaction of condition (6), with ir, f and
s replaced by the corresponding sample values given in the table. This indicates
a higher profit in applying the soft classifier over the application of the nearest
neighbor classifier. A negative benefit indicates that a higher profit is produced
by the nearest neighbor classifier. It can be seen that for two of the stocks
(xdssi and xelrnf) soft classification has produced better results than firm classification,
and for the remaining eight stocks firm classification by the nearest neighbor
method has produced better results. 5 Conclusion Solutions to the consistent classification
problem have been specified in terms of local separators of data points of one
class with respect to the other. The expected complexities of the proposed algorithms
have been specified, along with the expected sizes of the resulting classifiers.
Reduced consistent versions of the nearest neighbor classifier have been specified
and their expected complexities have been derived. A notion of "soft" classification
has been introduced, an algorithm for its implementation has been presented and
analyzed. A criterion for the utility of such classification has been presented
and its application in stock trading has been demonstrated. Acknowledgment The
author thanks Dr. Amir Atiya of Cairo University for providing the stock data
used in the examples and for valuable discussions of the corresponding results.
332 y. Baram References Baram Y. (1996) Consistent Classification, Firm and Soft,
CIS Report No. 9627, Center for Intelligent Systems, Technion, Israel Institute
of Technology, Haifa 32000, Israel. Baum, E. B . (1988) On the Capabilities of
Multilayer Perceptrons, J. Complexity, Hart, P. E. (1968) The Condensed Nearest
Neighbor Rule, IEEE Trans. on Information Theory. Holte, R. C. (1993) Very Simple Classification Rules Perform Well on Most Commonly Used Datasets, Machine Learning, Vol. 11,
No. 1 pp. 63 - 90. Rosenblatt, F. (1958) The Perceptron: A Probabilistic Model
for Information Stor age and Organization in the Brain, Psychological Review,
Vol. 65, pp. 386 - 408. Webb, G. I. (1996) Further Experimental Evidence against the Utility of Occam''s Razor, J. of Artificial Intelligence Research 4, pp. 397 - 417.'
- source_sentence: Functional role of neurons in primary auditory cortex
sentences:
- 'Introduction Learning in biological systems is of great importance. But while
cognitive learning (or "problem solving") is typically abrupt and generalizes
to analogous problems, perceptual skills appear to be acquired gradually and specifically:
Human subjects cannot generalize a perceptual discrimination skill to solve similar
problems with different attributes. For example, in a visual discrimination task
(Fig. 1), a subject who is trained to discriminate motion directions between 43
and 47 cannot use this skill to discriminate 133 from
137. Generalization has been found only when stimuli of different attributes are
interleaved [7, 10], or when the task is easier [6, 1]. For example, a subject
who is trained to discriminate 41 from 49 can later readily discriminate 131
from 139 [6]. The specificity of learning has been so far used to support the
hypothesis that perceptual learning embodies neuronal modifications in the brain''s
stimulus-specific cortical areas (e.g., visual area MT) [9,3, 2, 5, 8, 4]. In
contrast to previous results of learning specificity, we show in two experiments
in Section 2 that learning in motion discrimination generalizes in all cases where
specificity was thought to exist, although the mode of generalization varies.
(1) When the task is difficult, it is direction specific in the traditional sense;
but learning in a new direction accelerates. (2) When the task is easy, it generalizes
to all directions after training in only one direction. While (2) is consistent
with the findings reported in [6, 1], (1) demonstrate that generalization is the
rule, not an exception limited only to "easy" stimuli. 2 Perceptual learning experiments
Figure 1: Schematic of one trial. Left:
the stimulus was a random dot pattern viewed in a circular aperture, spanning
8° of visual angle, moving in a given primary direction (denoted dir). The primary direction was chosen from 12 directions, separated by 30°. Right: the direction of each of the two stimuli was randomly chosen from two candidate directions (dir ± Δ/2).
The subject judged whether the two stimuli moved in the same or different directions.
Feedback was provided. The motion discrimination task is described in Fig. 1.
In each trial, the subject was presented with two consecutive stimuli, each moving
in one of two possible directions (randomly chosen from the two directions dir + Δ/2 and dir - Δ/2). The directional difference Δ between the two stimuli was 8° in the easy condition, and 4° in the difficult condition. The experiment was otherwise identical to that in [2] that used Δ = 3°, except that our stimuli were displayed on an SGI computer monitor. Δ = 8° was chosen as the easy condition because most
subjects found it relatively easy to learn, yet still needed substantial training.
2.1 A difficult task We trained subjects extensively in one primary direction
with a difficult motion discrimination task (Δ = 4°), followed by extensive training
in a second primary direction. The two primary directions were sufficiently different
so direct transfer between them was not expected [2] (Fig. 2). Subjects'' initial
performance in both directions was comparable, replicating the classical result
of stimulus specific learning (no direct transfer). However, all subjects took
only half as many training sessions to make the same improvement in the second
direction. All subjects had extensive practice with the task prior to this experiment,
thus the acceleration cannot be simply explained by familiarity. Our results show that although perceptual
learning did not directly transfer in this difficult task, it did nevertheless
generalize to the new direction. The generalization was manifested as a 100% increase
in the rate of learning in the second direction. It demonstrates that the generalization
of learning, as manifested via direct transfer and via increase in learning rate,
may be thought of as two extremes of a continuum of possibilities. Figure 2: Subjects DJ and ZL needed 20 training sessions in the first direction,
and nine in the second; subject ZJX needed seven training sessions in the first,
and four in the second. The rate of learning (the amount of improvement per session)
in the second direction is significantly greater than in the first (t(2) = 13.41, p < 0.003).
We first measured the subjects'' baseline performance in an easy task - the discrimination of motion directions 8° apart - in 12 primary directions (64 trials
each, randomly interleaved). We then trained four subjects in one oblique primary
direction (chosen randomly and counter-balanced among subjects) for four sessions,
each with 700 trials. Finally, we measured again the subjects'' performance in
all directions. Every subject improved in all directions (Fig. 3). The performance
of seven control subjects was measured without intermediate training; two more
control subjects were added who were "trained" with similar motion stimuli but
were asked to discriminate a brightness change instead. The control subjects improved
as well, but significantly less (Δd'' = 0.09 vs. 0.78, Fig. 3). Our results clearly
show that training with an easy task in one direction leads to immediate improvement
in other directions. Hence the learned skill generalized across motion directions.
3 A computational model We will now adopt a general framework for the analysis
of perceptual learning results, using the language of signal detection theory.
Our model accounts for the results in this paper by employing the constraint of
limited computational resources. The model''s assumptions are as follows. 1. In
each trial, each of the two stimuli is represented by a population of measurements that encode all aspects of the stimulus, in particular, the output of localized
direction detectors. The measurements are encoded as a vector. The decision as
to whether the two stimuli are the same or not is determined by the difference
of the two vectors. 2. Each component of the input measurements is characterized
by its sensitivity for the discrimination task, e.g., how well the two motion
directions can be discriminated apart based on this component. The entire population
itself is generally divided into two sets: informative - measurements with significant
sensitivity, and Figure 3: Left: Discrimination
sensitivity d'' of subject JY who was trained in the primary direction 3000 Middle
: d'' of control subject YHL who had no training in between the two measurements.
Right: Average d'' (and standard error) for all subjects before and after training.
Trained: results for the four trained subjects. Note the substantial improvement
between the two measurements. For these subjects, the d'' measured after training
is shown separately for the trained direction (middle column) and the remaining
directions (right column). Control: results for the nine control subjects. The
control subjects improved their performance significantly less than the trained
subjects (tld'' uninformative - measurements with null sensitivity. In addition,
informative measurements may vary greatly in their individual sensitivity. When
many have high sensitivity, the task is easy. When most have low sensitivity,
the task is difficult. We assume that sensitivity changes from one primary direction
to the next, but the population of informative measurements remains constant.
For example, in our psychophysical task localized directional signals are likely
to be in the informative set for any motion direction, though their individual
sensitivity will vary based on specific motion directions. On the other hand,
local speed signals are never informative and therefore always belong to the uninformative
set. 3. Due to limited computational capacity, the system can, at a time, only
process a small number of components of the input vector. The decision in a single
trial is therefore made based on the magnitude of this sub-vector, which may vary
from trial to trial. In each trial the system rates the processed components of
the sub-vector according to their sensitivity for the discrimination task. After
a sufficient number of trials (enough to estimate all the component sensitivities
of the sub-vector), the system identifies the least sensitive component and replaces
it in the next trial with a new random component from the input vector. In effect,
the system is searching from the input vector a sub-vector that gives rise to
the maximal discrimination sensitivity. Therefore the performance of the system
is gradually improving, causing learning from session to session in the training
direction. 4. After learning in one training direction, the system identifies
the sets of in formative and uninformative measurements and include in the informative
set any measurement with significant (though possibly low) sensitivity. In the
next training direction, only the set of informative measurements is searched.
The search becomes more efficient, and hence the acceleration of the learning
rate. This accounts for the learning between training directions. We further assume
that each stimulus generates a signal that is a vector of N measurements {Ii}. We also assume that the signal for the discrimination task is the difference between two stimulus measurements: x = {xi}, xi = ΔIi. The same/different discrimination task is to decide whether
x is generated by noise - the null vector 0, or by some distinct signal - the
vector S. At time t a measurement vector xt is obtained, which we denote x st
if it is the signal S, and xnt otherwise. Assume that each measurement in xt is
a normal random variable. We measure the sensitivity d'' of each component. Since both the signal
and noise are assumed to be normal random variables, the sensitivity of the i-th
measurement in the discrimination task is d''i = |μi|/σi. Assuming further that the
measurements are independent of each other and of time, then the combined sensitivity
of M measurements is d'' = sqrt(Σi (μi/σi)^2). 3.1 Limited resources: an assumption
We assume that the system can simultaneously process at most M « N of the original
N measurements. Since the sensitivity d of the different measurements varies,
the discrimination depends on the combined sensitivity of the particular set of
M measurements that are being used. Learning in the first training direction,
therefore, leads to the selection of a "good" subset of the measurements, obtained
by searching in the measurement space. After searching for the best M measurements
for the current training direction, the system divides the measurements into two
sets: those with non-negligible sensitivity, and those with practically null sensitivity.
This rating is kept for the next training direction, when only the first set is
searched. One prediction of this model is that learning rate should not increase
with exposure only. In other words, it is necessary for subjects to be exposed
to the stimulus and do the same discrimination task for effective inter-directional
learning to take place. For example, assume that the system is given N measurements
: N/2 motion direction signals and N/2 speed signals. It learns during the first training direction that the N/2 speed signals have null sensitivity for the direction
discrimination task, whereas the directional signals have varying (but significant)
sensitivity. In the second training direction, the system is given the N measurements
whose sensitivity profile is different from that in the first training direction,
but still with the property that only the directional signals have any significant
sensitivity (Fig. 4b). Based on learning in the first training direction, the
system only searches the measurements whose sensitivity in the first training
direction was significant, namely, the N/2 directional signals. It ignores the speed signals. Now the asymptotic performance in the second direction remains
unchanged because the most sensitive measurements are within the searched population
- they are directional signals. The learning rate, however, doubles since the
system searches a space half as large. 3.2 Simulation results To account for the
different modes of learning, we make the following assumptions. When the task
is easy, many components have high sensitivity d''. When the task is difficult,
only a small number of measurements have high d''. Therefore, when the task is
easy, a subset of M measurements that give rise to the best performance is found
relatively fast. In the extreme, when the task is very easy (e.g., all the measurements have very high sensitivity), the rate of learning is almost instantaneous
and the observed outcome appears to be transfer. On the other hand, when the task
is difficult, it takes a long time to find the M measurements that give rise to
the best performance, and learning is slow. 50 Z. Liu and D . Weinshall Figure
4: Hypothetical sensitivity profile for a population of measurements of speed
and motion direction. Left: First training direction - only the motion direction
measurements have significant sensitivity (d'' above 0.1), with measurements around 45° having the highest d''. Right: Second direction - only the motion direction measurements have significant sensitivity, with measurements around 135° having
the highest d''. The detailed operations of the model are as follows. In the first
training direction, the system starts with a random set of M measurements. In
each trial and using feedback, the mean and standard deviation of each measurement
is computed: μi^s, σi^s for the signal and μi^n, σi^n for the noise. In the next trial, given the M measurements, x is classified as the signal if S > 0, and noise otherwise. At time T, the worst measurement is identified as the argument of mini d''i, with d''i = 2|μi^s - μi^n| / (σi^s + σi^n).
It is then replaced randomly from one of the remaining N - M measurements. The
learning and decision making then proceed as above for another T iterations. This
is repeated until the set of chosen measurements stabilizes. At the end, the decision
is made based on the set of M measurements that have the highest sensitivities.
Figure 5: Simulated performance (percent correct) as function of time. Left: Difficult
condition - the number of measurements with high d is small (4 out of 150); there
is no transfer from the first to the second training direction, but the learning
rate is increased two-fold. This graph is qualitatively similar to the results
shown in the top row of Fig. 2. Right: Easy condition - the number of measurements
with high d is large (72 out of 150); there is almost complete transfer from the
first to the second training direction. At the very beginning of training in the
second direction, based on the measured d in the first direction, the measurement
population is labeled as informative - those with d larger than the median value,
and uninformative - the remaining measurements. The learning and decision making
proceeds as above, while only informative measurements are considered during the
search. In the simulation we used N = 150 measurements, with M = 4. Half of the
N measurements (the informative measurements) had significant d. In the second
training direction, the sensitivities of the measurements were randomly changed,
but only the informative measurements had significant d. By varying the number
of measurements with high di in the population of informative measurements, we
get the different modes of generalization (Fig. 5). 4 Discussions In contrast to previous results on the
specificity of learning, we broadened the search for generalization beyond traditional
transfer. We found that generalization is the rule, rather than an exception.
Perceptual learning of motion discrimination generalizes in various forms: as
acceleration of learning rate (Exp. 1), as immediate improvement in performance
(Exp. 2). Thus we show that perceptual learning is more similar to cognitive learning
than previously thought, with both stimulus specificity and generalization as
important ingredients. In our scheme, the assumption of the computational resource
forced the discrimination system to search in the measurement space. The generalization phenomena - transfer and increased learning rate - occur due to improvement in search sensitivity from one training direction to the next, as the size of the
search space decreases with learning. Our scheme also predicts that learning rate
should only improve if the subject both sees the stimulus and does the relevant
discrimination task, in agreement with the results in Exp. 1. Importantly, our
scheme does not predict transfer per se, but instead a dramatic increase in learning
rate that is equivalent to transfer. Our model is qualitative and does not make
any concrete quantitative predictions. We would like to emphasize that this is
not a handicap of the model. Our goal is to show , qualitatively, that the various
generalization phenomena should not surprise us, as they should naturally occur
in a generic discrimination system with limited computational resources. Thus
we argue that it may be too early to use existing perceptual learning results
for the identification of the cortical location of perceptual learning, and the
levels at which modifications are taking place. References [1] Ahissar M and Hochstein
S. Task difficulty and the specificity of perceptual [2] Ball K and Sekuler R.
A specific and enduring improvement in visual motion [3] Fiorentini A and Berardi
N. Perceptual learning specific for orientation and [4] Gilbert C D. Early perceptual
learning. PNAS, 91:1195-1197, 1994. [5] Karni A and Sagi D. Where practice makes
perfect in texture discrimination: Evidence for primary visual cortex plasticity.
PNAS, 88:4966-4970, 1991. [6] Liu Z. Learning a visual skill that generalizes.
Tech. Report, NECI, 1995. [7] Liu Z and Vaina L M. Stimulus specific learning:
a consequence of stimulus specific experiments? Perception, 24(supplement):21,
1995. [8] Poggio T, Fahle M, and Edelman S. Fast perceptual learning in visual
hyper [9] Ramachandran V S. Learning-like phenomena in stereopsis. Nature, 262:382-
[10] Rubin N, Nakayama K, and Shapley R. Abrupt learning and retinal size specificity
in illusory-contour perception. Current Biology, 7:461-467,1997.'
- 'Introduction Application of mean-field theory to solve the problem of inference
in Belief Networks (BNs) is well known [1]. In this paper we will discuss a variational
mean-field theory and its application to BNs, sigmoidal BNs in particular. We
present a variational derivation of the mean-field theory, proposed by Plefka[2].
The theory will be developed for a stochastic system, consisting of N binary random variables, Si ∈ {0, 1}, described by the energy function E(S), and the following
Boltzmann Gibbs distribution at a temperature T: The application of this mean-field
method to Boltzmann Machines (BMs) is already done [3]. A large class of BNs is
described by the following energy function: The application of the mean-field
theory for such energy functions is not straightforward and further approximations
are needed. We propose a new approximation scheme and discuss its utility for
sigmoid networks, which is obtained by substituting f(x) = 1/(1 + e^x) in the above energy
function. The paper is organized as follows. In section 2 we present a variational
derivation of Plefka''s mean-field theory. In section 3 the theory is extended
to sigmoidal belief networks. In section 4 empirical evaluation is done. Concluding
remarks are given in section 5. 2 A Variational mean-field theory Plefka,[2] proposed
a mean-field theory in the context of spin glasses. This theory can, in principle,
yield arbitrarily close approximation to log Z. In this section we present an
alternate derivation from a variational viewpoint, see also [4],[5]. Let γ be a real parameter that takes values from 0 to 1. Let us define a γ-dependent partition and distribution function. Note that Z1 = Z and p1 = p. Introducing an
external real vector B, let us rewrite where Z is the partition function associated with the distribution function pγ given. Using Jensen''s Inequality, ⟨e^-x⟩ ≥ e^-⟨x⟩,
we get where Taking logarithms on both sides of (4) we obtain The right hand side
is defined as a function of u and γ via the following assumption. Invertibility assumption: For each fixed u and γ, (5) can be solved for B. If the invertibility
assumption holds then we can use u as the independent vector (with B dependent
on u) and rewrite (6) as where G is as defined in (7) This then gives a variational
feel: treat it as an external variable vector and choose it to minimize G for
a fixed γ. The stationarity conditions of the above minimization problem yield At the minimum point we have the equality G = -log Zγ. It is difficult to invert (5) for γ ≠ 0, thus making it impossible to write an algebraic expression for G for any nonzero γ. At γ = 0 the inversion is straightforward and one obtains A Taylor series approach is then undertaken around γ = 0 to build an approximation
to G. Define Then G M can be considered as an approximation of G. The stationarity
conditions are enforced by setting In this paper we will restrict ourselves to
M = 2. To do this we need to evaluate the following derivatives where For M = 1 we have the standard mean-field approach. The expression for M = 2 can be identified
with the TAP correction. The term (10) yields the TAP term for BM energy function.
3 Mean-field approximations for BNs The method, as developed in the previous section,
is not directly useful for BNs because of the intractability of the partial derivatives
at γ = 0. To overcome this problem, we suggest an approximation based on Taylor
series expansion. Though in this paper we will be restricting ourselves to sigmoid
activation function, this method is applicable to other activation functions also.
This method enables calculation of all the necessary terms required for extending Plefka''s method for BNs. Since, for BN operation, T is fixed to 1; T will be
dropped from all equations in the rest of the paper. Let us define a new energy
function where Since β is the important parameter, E(β, S, ū, w) will be referred to as E(β) so as to avoid notational clumsiness. We use a Taylor series approximation of E(β) with respect to β. Let us define If Ee approximates E, then we can write Let us now define the following function The Bi are assumed to be functions of ū, β, γ, which are obtained by inverting equations (12) By replacing E by Ee in (15) we obtain Ae where the definition of ū is obtained by replacing E by Ee. In view of (14) one can consider Ae as an approximation to A. This observation suggests an approximation The required terms needed in the Taylor expansion of G in γ can be approximated The biggest advantage in working with Ae rather than G is that the partial derivatives of Ae with respect to γ at γ = 0 and β = 1 can be expressed as functions of ū. We define (18) Figure 1: Three layer BN (2
x 4 x 6) with top down propagation of beliefs. The activation function was chosen
to be sigmoid. In light of the above discussion one can consider GM ≈ AMC; hence the mean-field equations can be stated as In this paper we will restrict ourselves to M = 2. The relevant objective functions for a general C are given by
All these objective functions can be expressed as a function of u. 4 Experimental
results To test the approximation schemes developed in the previous sections, numerical
experiments were conducted. Saul et al. [1] pioneered the application of mean-field
theory to BNs. We will refer to their method as the SJJ approach. We compare our
schemes with the SJJ approach. Small networks were chosen so that ln Z can be
computed by exact enumeration for evaluation purposes. For all the experiments
the network topology was fixed to the one shown in figure 1. This choice of the
network enables us to compare the results with those of [1]. To compare the performance
of our methods with their method we repeated the experiment conducted by them
for sigmoid BNs. Ten thousand networks were generated by randomly choosing weight
values in [-1,1]. The bottom layer units, or the visible units of each network
were instantiated to zero. The likelihood, ln Z, was computed by exact enumeration of all the states in the higher two layers. The approximate value of -ln Z was computed by AMC; ū was computed by solving the fixed point equations obtained
from (19). The goodness of approximation scheme was tested by the following measure
For a proper comparison we also implemented the SJJ method. The goodness of approximation
for the SJJ scheme is evaluated by substituting AMC, in (22), by Lsapprox; for
specific formula see [1]. The results are presented in the form of histograms
in Figure 2. Table 1: Mean of the goodness measure for randomly generated sigmoid networks, in different weight ranges (small weights [-1, 1]; large weights [-5, 5]). We also repeated the experiment with weights and biases taking values between -5 and 5; the
results are again presented in the form of histograms in Figure 3. The findings
are summarized in the form of means tabulated in Table 1. For small weights G12
and the SJJ approach show close results, which was expected. But the improvement
achieved by the G22 scheme is remarkable; it gave a mean value of 0.0029 which
compares substantially well against the mean value of 0.01139 reported in [6].
The improvement in [6] was achieved by using mixture distribution which requires
introduction of extra variational variables; more than 100 extra variational
variables are needed for a 5 component mixture. This results in substantial increase
in the computation costs. On the other hand the extra computational cost for G22
over G12 is marginal. This makes the G22 scheme computationally attractive over
the mixture distribution. Figure 2: Histograms for GlO and SJJ scheme for weights
taking values in [-1,1], for sigmoid networks. The plot on the left shows histograms for the schemes G11 and G12. They did not have any overlaps; G11 gives a mean
of -0.040 while G12 gives a mean of 0.0155. The middle plot shows the histogram
for the SJJ scheme, mean is given by 0.0157. The plot at the extreme right is for
the scheme G22 , having Of the three schemes G12 is the most robust and also yields
reasonably accurate results. It is outperformed only by G22 in the case of sigmoid
networks with low weights. Empirical evidence thus suggests that the choice of
a scheme is not straightforward and depends on the activation function and also
parameter values. Figure 3: Histograms for the G10 and SJJ schemes for weights
taking values in [-5,5] for sigmoid networks. The leftmost histogram shows for
G11 scheme having a mean of -0.0440, second from left is for G12 scheme having
a mean of 0.0231, and second from right is for SJJ scheme, having a mean of 0.0962.
The scheme G22 is at the extreme right with mean -0.0456. 5 Discussion Application
of Plefka''s theory to BNs is not straightforward. It requires computation of
some averages which are not tractable. We presented a scheme in which the BN energy
function is approximated by a Taylor series, which gives a tractable approximation
to the terms required for Plefka''s method. Various approximation schemes depending
on the degree of the Taylor series expansion are derived. Unlike the approach
in [1], the schemes discussed here are simpler as they do not introduce extra
variational variables. Empirical evaluation on small scale networks shows that
the quality of approximations is quite good. For a more detailed discussion of
these points see [7]. References [1] Saul, L . K. and Jaakkola, T. and Jordan,
M. I. (1996), Mean field theory for sigmoid belief networks, Journal of Artificial
Intelligence Research,4 [2] Plefka, T . (1982), Convergence condition of the TAP
equation for the Infinite-ranged Ising glass model,J. Phys. A: Math. Gen.,15 [3]
Kappen, H. J and Rodriguez, F. B(1998), Boltzmann machine learning using mean
field theory and linear response correction, Advances in Neural Information Processing Systems 10, (eds.) M. I. Jordan and M. J. Kearns and S. A. Solla, MIT press
[4] Georges, A. and Yedidia, J. S.(1991), How to expand around mean-field theory
using high temperature expansions,J. Phys. A: Math. Gen., 24 [5] Bhattacharyya,
C. and Keerthi, S. S.(2000), Information geometry and Plefka''s mean field theory,
J. Phys. A: Math. Gen.,33 [6] Bishop, M. C. and Lawrence, N. and Jaakkola, T.
and Jordan, M. I. (1997), Approximating Posterior Distributions in Belief Networks
using Mixtures, Advances in Neural Information Processing Systems 10, (eds.) Jordan,
M. I. and Kearns, M. J. and Solla, S., MIT press [7] Bhattacharyya, C. and Keerthi,
S. S. (1999), Mean field theory for a special class of belief networks, accepted
in Journal of Artificial Intelligence Research'
- 'Introduction It is known that auditory neurons are tuned for a number of independent
feature parameters of simple stimuli including frequency (Merzenich et al., 1973),
intensity (Sutter and Schreiner, 1995), amplitude modulation (Schreiner and Urbas,
1988), and others. In addition, auditory cortical responses to multiple stimuli can enhance or suppress one another in a time dependent fashion (Brosch and Schreiner, 1997;
Phillips and Cynader, 1985; Shamma and Symmes, 1985), and auditory cortical neurons
can be highly selective for species-specific vocalizations (Wang et al., 1995;
Wollberg and Newman, 1972), suggesting complex acoustic processing by these cells.
It is not yet known if these many independent selectivities of auditory cortical
neurons reflect a discernible underlying pattern of feature decomposition, as
has often been suggested (Merzenich et al., 1985; Schreiner and Mendelson, 1990;
Wang et al., 1995). Further, since sustained firing rate responses in the auditory
cortex to tonal stimuli are typically much lower than visual responses to drifting
bars (deCharms and Merzenich, 1996b), it has been suggested that the preferred
type of auditory stimulus may still not be known (Nelken et al., 1994). We sought
to develop an unbiased method for determining the full feature selectivity of
auditory cortical neurons, whatever it might be, in frequency and time based upon
reverse correlation. 2 Methods Recordings were made from a chronic array of up
to 49 individually placed ultrafine extracellular Iridium microelectrodes, placed
in the primary auditory cortex of the adult owl monkey. The electrodes had tip
lengths of 10-25 microns, which yield impedance values of 0.5-5 MOhm and good isolation
of signals from individual neurons or clusters of nearby neurons. We electrochemically
activated these tips to add an ultramicroscopic coating of Iridium Oxide, which
leaves the tip geometry unchanged, but decreases the tip impedance by more than
an order of magnitude, resulting in substantially improved recording signals.
These signals are filtered from .3-8kHz, sampled at 20kHz, digitized, and sorted.
Figure 1: Schematic of stimuli used for reverse correlation (visual cortex: reverse correlation using 2D visual patterns in time, spike trains, spatiotemporal receptive field; auditory cortex: reverse correlation using 1D auditory patterns (chords) in time, spectrotemporal receptive field). The stimuli used were a variant of random white
noise which was designed to allow us to characterize the responses of neurons
in time and in frequency. As shown in figure 1, these stimuli are directly analogous
to stimuli that have been used previously to characterize the response properties
of neurons in the primary visual cortex (Jones and Palmer, 1987; Reid and Alonso,
1995; Reid et al., 1991). In the visual case, stimuli consist of spatial checkerboards
that span some portion of the two-dimensional visual field and change pattern
with a short sampling interval. In the auditory case, which we have studied here,
the stimuli chosen were randomly selected chords, which approximately evenly span
a portion of the one-dimensional receptor
surface of the cochlea. These stimuli consist of combinations of pure tones, all
with identical phase and all with 5 msec cosine shaped ramps in amplitude when
they individually turn on or off. Each chord was created by randomly selecting
frequency values from 84 possible values which span 7 octaves from 110Hz to 14080Hz
in even semitone steps. The density of tones in each stimulus was 1 tone per octave
on average, or 7 tones per chord, but the stimuli were selected stochastically
so a given chord could be composed of a variable number of tones of randomly selected
frequencies. We have used sampling rates of 10-100 chords/second, and the data here are from stimuli with 50 chords/second. Stimuli with random, asynchronous
onset times of each tone produce similar results. These stimuli were presented
in the open sound field within an acoustical isolation chamber at 44.1 kHz sampling
rate directly from audio compact disk, while the animal sat passively in the sound
field or actively performed an auditory discrimination task, receiving occasional
juice rewards. The complete characterization set lasted for ten minutes, thereby
including 30,000 individual chords. Spike trains were collected from multiple
sites in the cortex simultaneously during the presentation of our characterization
stimulus set, and individually reverse correlated with the times of onset of each
of the tonal stimuli. The reverse correlation method computes the number of spikes
from a neuron that were detected, on average, during a given time preceding, during,
or following a particular tonal stimulus component from our set of chords. These
values are presented in spikes/s for all of the tones in the stimulus set, and
for some range of time shifts. This method is somewhat analogous in intention
to a method developed earlier for deriving spectrotemporal receptive fields for
auditory midbrain neurons (Eggermont et al., 1983), but previous methods have
not been effective in the auditory cortex. 3 Results Figure 2 shows the spectrotemporal
responses of neurons from four locations in the primary auditory cortex. In each
panel, the time in milliseconds between the onset of a particular stimulus component
and a neuronal spike is shown along the horizontal axis. Progressively greater
negative time shifts indicate progressively longer latencies from the onset of
a stimulus component until the neuronal spikes. The frequency of the stimulus
component is shown along the vertical axis, in octave spacing from a 110Hz standard,
with twelve steps per octave. The brightness corresponds to the average rate of
the neuron, in spikes/s, driven by a particular stimulus component. The reverse-correlogram
is thus presented as a stimulus triggered spike rate average, analogous to a standard
peristimulus time histogram but reversed in time, and is identical to the spectrogram
of the estimated optimal stimulus for the cell (a spike triggered stimulus average
which would be in units of mean stimulus density). A minority of neurons in the primary auditory cortex have spectrotemporal receptive fields that show only
a single region of increased rate, which corresponds to the traditional characteristic
frequency of the neuron, and no inhibitory region. We have found that cells of
this type (less than 10%, not shown) are less common than cells with multimodal
receptive field structure. More commonly, neurons have regions of both increased
and decreased firing rate relative to their mean rate within their receptive fields. For terminological convenience, these will be referred to as excitatory
and inhibitory regions, though these changes in rate are not diagnostic of an
underlying mechanism. Neurons with receptive fields of this type can serve as
detectors of stimulus edges in both frequency space, and in time. The neuron shown
in figure 2a has a receptive field structure indicative of lateral inhibition
in frequency space. This cell prefers a very narrow range of frequencies, and
decreases its firing rate for nearby frequencies, giving the characteristic of
a sharply-tuned bandpass filter. This Figure 2: Spectrotemporal receptive fields of neurons
in the primary auditory cortex of the awake primate. These receptive fields are
computed as described in methods. Receptive field structures read from left to
right correspond to a preferred stimulus for the neuron, with light shading indicating
more probable stimulus components to evoke a spike, and dark shading indicating
less probable components. Receptive fields read from right to left indicate the
response of the neuron in time to a particular stimulus component. The colorbars
correspond to the average firing rates of the neurons in Hz at a given time preceding,
during, or following a particular stimulus component. type of response is the
auditory analog of a visual or tactile edge detector with lateral inhibition.
Simple cells in the primary visual cortex typically show similar patterns of center
excitation along a short linear segment, surrounded by inhibition (Jones and Palmer,
1987;Reid and Alonso, 1995; Reid et al., 1991). The neuron shown in figure 2b
shows a decrease in firing rate caused by a stimulus frequency which at a later
time causes an increase in rate. This receptive field structure is ideally suited
to detect stimulus transients; and can be thought of as a detector of temporal
edges. Neurons in the auditory cortex typically prefer this type of stimulus,
which is initially soft or silent and later loud. This corresponds to a neuronal
response which shows an increase followed by a decrease in firing rate. This is
again analogous to neuronal responses in the primary visual cortex, which also
typically show a firing rate pat tern to an optimal stimulus of excitation followed
by inhibition, and preference for stimulus transients such as when a stimulus
is first off and then comes on. The neuron shown in figures 2c shows an example
which has complex receptive field structure, with multiple regions. Cells of this
type would be indicative of selectiv ity for feature conjunctions or quite complex
stimuli, perhaps related to sounds in the animal''s learned environment. Cells
with complex receptive field structures are common in the awake auditory cortex,
and we are in the process of quantifying the percentages of cells that fit within
these different categories. Neurons were observed which respond with increased
rate to one frequency range at one time, and a different frequency range at a
later time, indicative of selectivity for frequency modulations(Suga, 1965). Regions
of decreased firing rate can show similar patterns. The neuron shown in figure
2d is an example of this type. This pattern is strongly analogous to motion energy
detectors in the visual system (Adelson and Bergen, 1985), which detect stimuli
moving in space, and these cells are selective for changes in frequency. 128 R.
C. deCharms and M M. Merzenich 2 octsec 6 octsec 10 octsec 14 octsec 30 octsec
100 octsec 2 octsec 6 octsec 10 octsec 14 octsec 30 octsec 100 octsec Figure 3:
Parametric stimulus set used to explore neuronal responses to continuously changing
stimulus frequency. Images axe spectrograms of stimuli from left to right in time,
and spanning seven octaves of frequency from bottom to top. Each stimulus is one
second. Numbers indicate the sweep rate of the stimuli in octaves per second.
Based on the responses shown, we wondered whether we could find a more optimal
class of stimuli for these neuron, analogous to the use of drifting bars or gratings
in the primary visual cortex. We have created auditory stimuli which correspond
exactly to the preferred stimulus computed for a paxticulax cell from the cell''s
spectrotemporal receptive field (manuscript in prepaxation), and we have also
designed a paxametric class of stimuli which are designed to be particularly effective
for neurons selective for stimuli of changing amplitude or frequency, which are
presented here. The stimuli shown in figure 3 are auditory analogous of visual
drifting grating stimuli. The stimuli axe shown as spectrograms, where time is
along the horizontal axis, frequency content on an octave scale is along the vertical
axis, and brightness corresponds to the intensity of the signal. These stimuli
contain frequencies that change in time along an octave frequency scale so that
they repeatedly pass approximately linearly through a neurons receptive field,
just as a drifting grating would pass repeatedly through the receptive field of
a visual neuron. These stimuli axe somewhat analogous to drifting ripple stimuli
which have recently been used by Kowalski, et.al. to characterize the linearity
of responses of neurons in the anesthetized ferret auditory cortex (Kowalski Neurons
in the auditory cortex typically respond to tonal stimuli with a brisk onset response
at the stimulus transient, but show sustained rates that axe far smaller than
found in the visual or somatosensory systems (deCharms and Merzenich, 1996a).
We have found neurons in the awake animal that respond with high firing rates
and significant selectivity to the class of moving stimuli shown in figure 3.
An outstanding example of this is shown in figure 4. The neuron in this example
showed a very high sustained firing rate to the optimal drifting stimulus, as
high as 60 Hz for one second. The neuron shown in this example also showed considerable
selectivity for stimulus velocity, as well as some selectivity for stimulus direction.
4 Conclusions These stimuli enable us to efficiently quantify the response characteristics
of neu rons in the awake primaxy auditory cortex, as well as producing optimal
stimuli for particular neurons. The data that we have gathered thus far extend
our knowledge about the complex receptive field structure of cells in the primary
auditory cortex, Cha racterizing Auditory Cortical Ne urons Using Reverse Correlation
129 2 octsec 6 octsec 10 octIsec 14 octsec 30 octsec 100 octsec -2 octsec -6 octsec
-10 octsec -14 octsec -30 octsec -100 octsec Figure 4: Responses of a neuron in
the primary auditory cortex of the awake pri mate to example stimuli take form
our characterization set, as shown in figure 3. In each panel, the average response
rate histogram in spikes per second is shown below rastergrams showing the individual
action potentials elicited on,each of twenty trials. and show some considerable
analogy with neurons in the primary visual cortex. In addition, they indicate
that it is possible to drive auditory cortical cells to high rates of sustained
firing, as in the visual cortex. This method will allow a number of future questions
to be addressed. Since we have recorded many neurons simultaneously, we are interested
in the interactions among large populations of neurons and how these relate to
stimuli. We are also recording responses to these stimuli while monkeys are performing
cognitive tasks involving attention and learning, and we hope that this will give
us insight into the effects on cell selectivity of the context provided by other
stimuli, the animal''s behavioral state or awareness of the stimuli, and the animal''s
prior learning of stimulus sets. 5 References Adelson EH, Bergen JR (1985) Spatiotemporal
energy models for the perception of Brosch M, Schreiner CE (1997) Time course
of forward masking tuning curves in cat primary auditory cortex. J Neurophysiol,
77, 923-43. deCharms Re, Merzenich MM (1996a) Primary cortical representation
of sounds by the coordination of action-potential timing. Nature, 381, 610-3.
deCharms RC , Merzenich MM (1996b) Primary cortical representation of sounds by
the coordination of action-potential timing. Nature, 381, 610-613. EggeI1I).ont
JJ, Aertsen AM, Johannesma PI (1983) Quantitative characterisation procedure for
auditory neurons based on the spectro-temporal receptive field. Hear Hubel DH,
Wiesel TN (1962) Receptive fields, binocular interaction and functional archtecture
in the cat''s visual cortex. J. Physiol., 160, 106-154. Jones JP, Palmer LA (1987)
The two-dimensional spatial structure of simple receptive 130 R. C. deCharms and
M M . Merzenich fields in cat striate cortex. J Neurophysiol, 58, 1187-211. Kowalski
N, Depireux DA, Shamma SA (1996a) Analysis of dynamic spectra in ferret primary
auditory cortex. I. Characteristics of single-unit responses to moving ripple
spectra. J Neurophysiol, 76, 3503-23. Kowalski N, Depireux DA, Shamma SA (1996b)
Analysis of dynamic spectra in fer ret primary auditory cortex. II. Prediction
of unit responses to arbitrary dynamic spectra. J Neurophysiol, 76, 3524-34. Merzenich
MM, Jenkins WM, Middlebrooks JC (1985) Observations and hypotheses on special
organizational features of the central auditory nervous system. In: Dynamic Aspects of Neocortical Function. Edited by G. M. Edelman, W. E. Gall, and W. M. Cowan. Merzenich MM,
Knight PL, Roth GL (1973) Cochleotopic organization of primary auditory cortex
in the cat. Brain Res, 63, 343-6. Nelken I, Prut Y, Vaadia E, Abeles M (1994)
In search of the best stimulus: an optimization procedure for finding efficient
stimuli in the cat auditory cortex. Hear Phillips DP, Cynader MS (1985) Some neural
mechanisms in the cat''s auditory cortex underlying sensitivity to combined tone
and wide-spectrum noise stimuli. Hear Res, Reid RC, Alonso JM (1995) Specificity
of monosynaptic connections from thalamus to visual cortex. Nature, 378,281-4.
Reid RC, Soodak RE, Shapley RM (1991) Directional selectivity and spatiotemporal
structure of receptive fields of simple cells in cat striate cortex. J Neurophysiol,
66, Ringach DL, Hawken MJ, Shapley R (1997) Dynamics of orientation tuning in
macaque primary visual cortex. Nature, 387, 281-4. Schreiner CE, Mendelson JR
(1990) Functional topography of cat primary auditory cortex: distribution of integrated
excitation. J Neurophysiol, 64, 1442-59. Schreiner CE, Urbas JV (1988) Representation
of amplitude in the auditory cortex of the cat. II. Comparison between cortical
fields. Hear. Res., 32, 49-64. Shamma SA, Symmes D (1985) Patterns of inhibition
in auditory cortical cells in awake squirrel monkeys. Hear Res, 19, 1-13. Suga
N (1965) Responses of cortical auditory neurones to frequency modulated sounds
in echo-locating bats. Nature, 206, 890-l. Sutter ML, Schreiner CE (1995) Topography
of intensity tuning in cat primary au ditory cortex: single-neuron versus multiple-neuron
recordings. J Neurophysiol, 73, Wang X, Merzenich MM, Beitel R, Schreiner CE (1995)
Representation of a species specific vocalization in the primary auditory cortex
of the common marmoset: tem poral and spectral characteristics. J Neurophysiol,
74, 2685-706. Wollberg Z, Newman JD (1972) Auditory cortex of squirrel monkey:
response pat terns of single cells to species-specific vocalizations. Science,
175, 212-214.'
- source_sentence: Enhanced learning efficiency through input redundancy cancellation
in neural networks
sentences:
- 'INTRODUCTION Learning problems involving sequentially structured data cannot
be effectively dealt with by static models such as feedforward networks. Recurrent
networks allow to model complex dynamical systems and can store and retrieve contextual
information in a flexible way. Up until the present time, research efforts of
supervised learning for recurrent networks have almost exclusively focused on
error minimization by gradient descent methods. Although effective for learning
short term memories, practical difficulties have been reported in training recurrent
neural networks to perform tasks in which the temporal contingencies present in
the input/output sequences span long intervals (Bengio et al., 1994; Mozer, 1992).
Previous work on alternative training algorithms (Bengio et al., 1994) could suggest
that the root of the problem lies in the essentially discrete nature of the process
of storing information for an indefinite amount of time. Thus, a potential solution
is to propagate, backward in time, targets in a discrete state space rather than
differential error information. Extending previous work (Bengio & Frasconi, 1994a),
in this paper we propose a statistical approach to target propagation, based on
the EM algorithm. We consider a parametric dynamical system with discrete states
and we introduce a modular architecture, with subnetworks associated to discrete
states. The architecture can be interpreted as a statistical model and can be
trained by the EM or generalized EM (GEM) algorithms (Dempster et al., 1977),
considering the internal state trajectories as missing data. In this way learning
is decoupled into a temporal credit assignment subproblem and a static learning subproblem
that consists of fitting parameters to the next-state and output mappings defined
by the estimated trajectories. In order to iteratively tune parameters with the
EM or GEM algorithms, the system propagates forward and backward a discrete distribution
over the n states, resulting in a procedure similar to the Baum-Welch algorithm
used to train standard hidden Markov models (HMMs) (Levinson et al., 1983). HMMs
however adjust their parameters using unsupervised learning, whereas we use EM
in a supervised fashion. Furthermore, the model presented here could be called
Input/Output HMM, or IOHMM, because it can be used to learn to map input sequences
to output sequences (unlike standard HMMs, which learn the output sequence distribution).
This model can also be seen as a recurrent version of the Mixture of Experts architecture
(Jacobs et al., 1991), related to the model already proposed in (Cacciatore and
Nowlan, 1994). Experiments on artificial tasks (Bengio Frasconi, 1994a) have shown
that EM recurrent learning can deal with long term dependencies more effectively
than backpropagation through time and other alternative algorithms. However, the
model used in (Bengio Frasconi, 1994a) has very limited representational capabilities
and can only map an input sequence to a final discrete state. In the present paper
we describe an extended architecture that allows to fully exploit both input and
output portions of the data, as required by the supervised learning paradigm .
In this way , general sequence processing tasks, such as production, classification,
or prediction, can be dealt with. 2 THE PROPOSED ARCHITECTURE We consider a discrete
state dynamical system based on the following state space description: x_t = f(x_{t-1}, u_t), y_t = g(x_t, u_t), where u_t ∈ R^m is the input vector at time t, y_t ∈ R^r is the output vector, and x_t ∈ {1, 2, ..., n} is a discrete state. These equations define a generalized
Mealy finite state machine, in which inputs and outputs may take on continuous
values. In this paper, we consider a probabilistic version of these dynamics,
where the current inputs and the current state distribution are used to estimate
the state distribution and the output distribution for the next time step. Admissible
state transitions will be specified by a directed graph G whose vertices correspond to the model''s states and the set of successors for state j is S_j. The system defined by equations (1) can be modeled by the recurrent architecture depicted in Figure 1(a). The architecture is composed of a set of state networks N_j, j = 1 ... n and a set of output networks O_j, j = 1 ... n. Each one of the state and output networks is uniquely associated to one of the states, and all networks share the same input u_t. Each state network N_j has the task of predicting the next state distribution, based on the current input and given that x_{t-1} = j. Similarly, each output network O_j predicts the output of the system, given the current state and
input. All the subnetworks are assumed to be static and they are defined by means
of smooth mappings N_j(u_t; θ_j) and O_j(u_t; ϑ_j), where θ_j and ϑ_j are vectors of adjustable parameters (e.g., connection weights). The ranges of the functions N_j(·) may be constrained in order to account for the underlying transition graph
G. Each output φ_ij,t of the state subnetwork N_j (at time t) is associated to one of the successors i of state j. Thus the last layer of N_j has as many units as the cardinality of S_j. For convenience of notation, we suppose that φ_ij,t are defined for each i, j = 1, ..., n and we impose the condition φ_ij,t = 0 for each i not belonging to S_j. The softmax function is used in the last layer: φ_ij,t = e^{a_ij,t} / Σ_{l ∈ S_j} e^{a_lj,t}, j = 1, ..., n, i ∈ S_j, where a_ij,t are intermediate variables that can be thought of as the activations of the output units of subnetwork N_j. In this way Σ_{i=1}^{n} φ_ij,t = 1 for all j and t. [Figure 1: (a) The proposed IOHMM architecture. (b) Bottom: Bayesian network expressing conditional dependencies for an IOHMM; top: Bayesian network for a standard HMM.] The vector ζ_t ∈ R^n represents the internal state of the model and it
is computed as a linear combination of the outputs of the state networks, gated by the previously computed internal state: ζ_t = Σ_{j=1}^{n} ζ_{j,t-1} φ_{j,t}. (2) Output networks compete to predict the global output of the system η_t ∈ R^r: η_t = Σ_{j=1}^{n} ζ_{jt} η_{jt}, where η_{jt} ∈ R^r is the output of subnetwork O_j. At this level, we do not need to further specify the internal architecture
of the state and output subnetworks. Depending on the task, the designer may decide
whether to include hidden layers and what activation rule to use for the hidden
units. This connectionist architecture can be also interpreted as a probability
model. Let us assume a multinomial distribution for the state variable x_t and let us consider ζ_t, the main variable of the temporal recurrence (2). If we initialize the vector ζ_0 to positive numbers summing to 1, it can be interpreted as a vector of initial state probabilities. In general, we obtain the relation ζ_it = P(x_t = i | u_1^t), having denoted with u_1^t the subsequence of inputs from time 1 to t, inclusively. Equation (2) then has the following probabilistic interpretation: φ_ij,t = P(x_t = i | x_{t-1} = j, u_t), i.e., the subnetworks N_j compute transition probabilities conditioned on the input u_t. As in neural networks trained to minimize the output squared error, the output η_t of this architecture can be interpreted as an expected "position parameter" for the probability distribution of the output y_t. However, in addition to being conditional on an input u_t, this expectation is also conditional on the state x_t, i.e. η_t = E[y_t | x_t, u_t]. The actual form of the output density, denoted f_Y(y_t; η_t),
will be chosen according to the task. For example a multinomial distribution is
suitable for sequence classification, or for symbolic mutually exclusive outputs.
Instead, a Gaussian distribution is adequate for producing continuous outputs.
In the first case we use a softmax function at the output of subnetworks OJ; in
the second case we use linear output units for the subnetworks O J. In order to
reduce the amount of computation, we introduce an independency model among the
variables involved in the probabilistic interpretation of the architecture. We
shall use a Bayesian network to characterize the probabilistic dependencies among
these variables. Specifically, we suppose that the directed acyclic graph 9 depicted
at the bottom of Figure 1 b is a Bayesian network for the dependency model associated
to the variables u I, xI, YI. One of the most evident consequences of this independency
model is that only the previous state and the current input are relevant to determine
the next-state. This one-step memory property is analogue to the Markov assumption
in hidden Markov models (HMM). In fact, the Bayesian network for HMMs can be obtained
by simply removing the u_t nodes and arcs from them (see top of Figure 1b). 3 A
SUPERVISED LEARNING ALGORITHM The learning algorithm for the proposed architecture
is derived from the maximum likelihood principle. The training data are a set
of P pairs of input/output sequences (of length T_p): D = {(u_1^{T_p}(p), y_1^{T_p}(p)); p = 1 ... P}. Let Θ denote the vector of parameters obtained by collecting all the parameters θ_j and ϑ_j of the architecture. The likelihood function is then given by L(Θ; D) = Π_{p=1}^{P} P(y_1^{T_p}(p) | u_1^{T_p}(p); Θ). (6) The
output values (used here as targets) may also be specified intermittently. For
example, in sequence classification tasks, one may only be interested in the output y_T at the end of each sequence. The modification of the likelihood to account
for intermittent targets is straightforward. According to the maximum likelihood
principle, the optimal parameters are obtained by maximizing (6). In order to
apply EM to our case we begin by noting that the state variables x_t are not observed. Knowledge of the model''s state trajectories would allow one to decompose the temporal learning problem into 2n static learning subproblems. Indeed, if x_t were known, the probabilities ζ_it would be either 0 or 1 and it would be possible to train each subnetwork separately, without taking into account any temporal dependency. This observation allows us to link EM learning to the target propagation approach discussed in the introduction. Note that if we used a Viterbi-like approximation (i.e., considering only the most likely path), we would indeed have 2n static learning problems at each epoch. In order to derive the learning equations, let us define the complete data as D_c = {(u_1^{T_p}(p), y_1^{T_p}(p), x_1^{T_p}(p)); p = 1 ... P}. The
corresponding complete-data log-likelihood is l_c(Θ; D_c). Since l_c(Θ; D_c) depends on the hidden state variables it cannot be maximized directly. The MLE optimization is then solved by introducing the auxiliary function Q(Θ; Θ̂) and iterating the following two steps for k = 1, 2, ...: Estimation: Compute Q(Θ; Θ̂) = E[l_c(Θ; D_c) | D, Θ̂]. Maximization: Update the parameters as Θ̂ ← arg max_Θ Q(Θ; Θ̂). (8) The expectation of (7) can be expressed in terms of the quantities ĥ_ij,t = E[z_it z_j,t-1 | u_1^T, y_1^T; Θ̂], where z_it denotes an indicator variable that is 1 if x_t = i and 0 otherwise. The hat in ζ̂_it and ĥ_ij,t means that these variables are computed using the "old" parameters Θ̂. In order to compute ĥ_ij,t we introduce the forward probabilities α_it = P(y_1^t, x_t = i; u_1^t) and the backward probabilities β_it = P(y_{t+1}^T | x_t = i, u_1^T), which are updated by the usual forward and backward recursions. Each iteration of the EM algorithm requires maximizing
Q(Θ; Θ̂). We first consider a simplified case, in which the inputs are quantized (i.e., belonging to a finite alphabet {σ_1, ..., σ_K}) and the subnetworks behave like lookup tables addressed by the input symbols σ_t, i.e. we interpret each parameter as w_ijk = P(x_t = i | x_{t-1} = j, σ_t = k). For simplicity, we restrict the analysis to classification tasks and we suppose that targets are specified as desired final states for each sequence. Furthermore, no output subnetworks are used in this particular application of the algorithm. In this case we obtain the reestimation formulae in closed form. In general, however, if the subnetworks have hidden sigmoidal units, or use a softmax function to constrain their outputs to sum to one, the
maximum of Q cannot be found analytically. In these cases we can resort to a GEM
algorithm, that simply produces an increase in Q, for example by gradient ascent. In this case, the derivatives of Q with respect to the parameters can be easily computed as follows. Let θ_jl be a generic weight in the state subnetwork N_j. From equation (9), the required partial derivatives can be computed using backpropagation. Similarly, denoting with ϑ_ik a generic weight of the output subnetwork O_i, the corresponding derivatives are also computed using backpropagation. Intuitively, the
parameters are updated as if the estimation step of EM had provided targets for
the outputs of the 2n subnetworks, for each time t. Although GEM algorithms are
also guaranteed to find a local maximum of the likelihood, their convergence may
be significantly slower compared to EM. In several experiments we noticed that
convergence can be accelerated with stochastic gradient ascent. 4 COMPARISONS It appears natural to find similarities between the
recurrent architecture described so far and standard HMMs (Levinson et al., 1983).
The architecture proposed in this paper differs from standard HMMs in two respects:
computing style and learning. With IOHMMs, sequences are processed similarly to
recurrent networks, e.g., an input sequence can be synchronously transformed into
an output sequence. This computing style is real-time and predictions of the outputs
are available as the input sequence is being processed. This architecture thus
allows one to implement all three fundamental sequence processing tasks: production,
prediction, and classification. Finally, transition probabilities in standard
HMMs are fixed, i.e. states form a homogeneous Markov chain. In IOHMMs, transition
probabilities are conditional on the input and thus depend on time, resulting
in an inhomogeneous Markov chain. Consequently, the dynamics of the system (specified
by the transition probabilities) are not fixed but are adapted in time depending
on the input sequence. The other fundamental difference is in the learning procedure.
While interesting for their capabilities of modeling sequential phenomena, a major
weakness of standard HMMs is their poor discrimination power due to unsupervised
learning. An approach that has been found useful to improve discrimination in
HMMs is based on maximum mutual information (MMI) training. It has been pointed
out that supervised learning and discriminant learning criteria like MMI are actually
strictly related (Bridle, 1989). Although the parameter adjusting procedure we
have defined is based on MLE, y_1^T is used as desired output in response to the input u_1^T, resulting in discriminant supervised learning. Finally, it is worth mentioning that a number of hybrid approaches have been proposed to integrate connectionist approaches into the HMM framework. For example in (Bengio et al.,
1992) the observations used by the HMM are generated by a feedforward neural network.
In (Bourlard and Wellekens, 1990) a feedforward network is used to estimate state
probabilities, con ditional to the acoustic sequence. A common feature of these
algorithms and the one proposed in this paper is that neural networks are used
to extract temporally local information whereas a Markovian system integrates
long-term constraints. We can also establish a link between IOHMMs and adaptive
mixtures of experts (ME) (Jacobs et al., 1991). Recently, Cacciatore Nowlan (1994)
have proposed a recurrent extension to the ME architecture, called mixture of
controllers (MC), in which the gating network has feedback connections, thus allowing
to take temporal context into account. Our IOHMM architecture can be interpreted
as a special case of the MC architecture, in which the set of state subnetworks
play the role of a gating network having a modular structure and second order
connections. 5 REGULAR GRAMMAR INFERENCE In this section we describe an application
of our architecture to the problem of grammatical inference. In this task the
learner is presented a set of labeled strings and is requested to infer a set
of rules that define a formal language. It can be considered as a prototype for
more complex language processing problems. However, even in the "simplest" case,
i.e. regular grammars , the task can be proved to be NP-complete (Angluin and
Smith, 1983). We report experimental results on a set of regular grammars introduced
by Tomita (1982) and afterwards used by other researchers to measure the accuracy
of inference methods based on recurrent networks (Giles et al., 1992; Pollack,
1991; Watrous and Kuhn, 1992). We used a scalar output with supervision on the final output y_T that was modeled as a Bernoulli variable f_Y(y_T; η_T) = η_T^{y_T} (1 - η_T)^{1-y_T}, with y_T = 0 if the string is rejected and y_T = 1 if it is accepted. In this application we did not apply external inputs to the output networks. [Table 1: Summary of experimental results on the seven Tomita grammars; for each grammar the columns report the selected model size n, the size of the minimal recognizing FSA, the fraction of converged trials, the average/worst/best accuracies, and the best result of Watrous & Kuhn.] This corresponds to modeling a Moore finite state machine
. Given the absence of prior knowledge about plausible state paths, we used an
ergodic transition graph (i.e., fully connected).In the experiments we measured
convergence and generalization performance using different sizes for the recurrent
architecture. For each setting we ran 20 trials with different seeds for the initial
weights. We considered a trial successful if the trained network was able to correctly
label all the training strings. The model size was chosen using a cross-validation
criterion based on performance on 20 randomly generated strings of length T ≤
12. For comparison, in Table 1 we also report for each grammar the number of states
of the minimal recognizing FSA (Tomita, 1982). We tested the trained networks
on a corpus of 2^13 - 1 binary strings of length T ≤ 12. The final results are
summarized in Table 1. The column "Convergence" reports the fraction of trials
that succeeded to separate the training set. The next three columns report averages
and order statistics (worst and best trial) of the fraction of correctly classified
strings, measured on the successful trials. For each grammar these results refer
to the model size n selected by cross-validation. Generalization was always perfect
on grammars 1,4,5 and 6. For each grammar, the best trial also attained perfect
generalization. These results compare very favorably to those obtained with second-order
networks trained by gradient descent, when using the learning sets proposed by
Tomita. For comparison, in the last column of Table 1 we reproduce the results
reported by Watrous Kuhn (1992) in the best of five trials. In most of the successful
trials the model learned an actual FSA behavior with transition probabilities
asymptotically converging either to 0 or to 1. This renders trivial the extraction
of the corresponding FSA . Indeed, for grammars 1,4,5, and 6, we found that the
trained networks behave exactly like the minimal recognizing FSA . A potential
training problem is the presence of local maxima in the likelihood function.
For example, the number of converged trials for grammars 3, 4, and 5 is quite
small and the difficulty of discovering the optimal solution might become a serious
restriction for tasks involving a large number of states. In other experiments
(Bengio & Frasconi, 1994a), we noticed that restricting the connectivity of the transition graph can significantly help to remove problems of convergence. Of course, this approach can be effectively exploited only if some prior knowledge
about the state space is available. For example, applications of HMMs to speech
recognition always rely on structured topologies. 6 CONCLUSIONS There are still
a number of open questions. In particular, the effectiveness of the model on tasks
involving large or very large state spaces needs to be carefully evaluated. In (Bengio & Frasconi, 1994b) we show that learning long-term dependencies in these models becomes more difficult as we increase the connectivity of the state transition graph. However, because transition probabilities
of IOHMMs change at each t, they deal better with this problem of long-term dependencies
than standard HMMs. Another interesting aspect to be investigated is the capability
of the model to successfully perform tasks of sequence production or prediction.
For example, interesting tasks that could also be approached are those related
to time series modeling and motor control learning. References Angluin, D. and
Smith, C. (1983). Inductive inference: Theory and methods. Computing Surveys. Bengio, Y. and
Frasconi, P. (1994a). Credit assignment through time: Alternatives to backpropagation.
In Cowan, J., Tesauro, G., and Alspector, J., editors, Advances in Neural Information
Processing Systems 6. Morgan Kaufmann. Bengio, Y. and Frasconi, P. (1994b). An
EM Approach to Learning Sequential Behavior. Tech. Rep. RT-DSI11-94, University
of Florence. Bengio, Y., De Mori, R., Flammia, G., and Kompe, R. (1992). Global
optimization of a neural network-hidden markov model hybrid. IEEE Transactions
on Neural Networks. Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term
dependencies with gradient descent is difficult. IEEE Trans. Neural Networks,
5(2). Bourlard, H. and Wellekens, C. (1990). Links between hidden markov models
and multilayer perceptrons. IEEE Trans. Pattern An. Mach. Intell., 12:1167-1178.
Bridle, J. S. (1989). Training stochastic model recognition algorithms as net
works can lead to maximum mutual information estimation of parameters. In D .S.Touretzky,
ed., NIPS2, pages 211-217. Morgan Kaufmann. Cacciatore, T. W. and Nowlan, S. J.
(1994). Mixtures of controllers for jump linear and non-linear plants. In Cowan,
J. et. al., editors, Advances in Neural Information Processing Systems 6, San
Mateo, CA. Morgan Kaufmann. Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977).
Maximum-likelihood from incomplete data via the EM algorithm. J. Royal Stat. Soc.
B, 39:1-38. Giles, C. L., Miller, C. B., Chen, D., Chen, H. H., Sun, G. Z., and Lee, Y. C. (1992). Learning and extracting finite state automata with second-order recurrent neural networks. Neural Computation, 4(3):393-405. Jacobs, R. A., Jordan, M.
I., Nowlan, S. J., and Hinton, G. E. (1991). Adaptive mixture of local experts.
Neural Computation, 3:79-87. Levinson, S. E., Rabiner, L. R., and Sondhi, M. M.
(1983). An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition. Bell System Technical Journal,
64(4):1035-1074. Mozer, M. C. (1992). The induction of multiscale temporal structure.
In Moody, J. et. al., eds, NIPS 4 pages 275-282. Morgan Kaufmann. Pollack, J.
B. (1991). The induction of dynamical recognizers. Machine Learning, Tomita, M.
(1982). Dynamic construction of finite-state automata from examples using hill-climbing.
Proc. 4th Cog. Science Conf., pp. 105-108, Ann Arbor MI. Watrous, R. L. and Kuhn,
G. M. (1992). Induction of finite-state languages using second-order recurrent
networks. Neural Computation, 4(3):406-414.'
- 'INTRODUCTION In many learning control problems, the evaluation used to modify
(and thus im prove) control may not be available in terms of the controller''s
output: instead, it may be in terms of a spatial transformation of the controller''s
output variables (in which case we shall term it as being "distal in space"),
or it may be available only several time steps into the future (termed as being
"distal in time"). For example, control of a robot arm may be exerted in terms
of joint angles, while evaluation may be in terms of the endpoint cartesian coordinates;
furthermore, we may only wish to evaluate the endpoint coordinates reached after
a certain period of time: the co- Current address: Computation and Neural Systems
Program, California Institute of Technology, Pasadena CA. 563 564 Brody ordinates
reached at the end of some motion, for instance. In such cases, supervised learning
methods are not directly applicable, and other techniques must be used. Here we
study one such technique (proposed for cases where the evaluation is distal in
both space and time by [Jordan Jacobs 90)), analyse a source of its problems,
and propose a simple solution for them which leads to fast, efficient learning.
We first describe two methods, and then combine them into the "predictive forward
modeling" technique with which we are concerned. 1.1 FORWARD MODELING "Forward
Modeling" [Jordan Rumelhart 90] is useful for dealing with evaluations which
are distal in space; it involves the construction of a differentiable model to
approximate the controller-action - evaluation transformation. Let our controller
have internal parameters w, output c, and be evaluated in space e, where e e(c)
is an unknown but well-defined transformation. If there is a desired output in
space e, called e, we can write an "error" function, that is, an evaluation we
wish minimised, and differentiate it w.r.t. the controller''s weights to obtain
Using a differentiable controller allows us to obtain the first factor in the
second equation, and the third factor is also known; but the second factor is
not. However, if we construct a differentiable model (called a ''''forward model")
of e(c), then we can obtain an approximation to the second term by differentiating
the model, and use this to obtain an estimate of the gradient 8E 8w through equation
(1); this can then be used for comparatively fast minimisation of E, and is what
is known as "forward modeling". 1.2 PREDICTIVE CRITICS To deal with evaluations
which are distal in time, we may use a "critic" network, as in [Barto, Sutton Anderson
83]. For a particular control policy implemented by the controller network, the
critic is trained to predict the final evaluation that will be obtained given
the current state - using, for example, Sutton''s TD algorithm [Sutton 88]. The
estimated final evaluation is then available as soon as we enter a state, and
so may in turn be used to improve the control policy. This approach is closely
related to dynamic programming [Barto, Sutton Watkins 89]. 1.3 PREDICTIVE FORWARD
MODELS While the estimated evaluation we obtain from the critic is no longer distal
in time, it may still be distal in space. A natural proposal in such cases, where
the evaluation signal is distal both in space and time, is to combine the two
techniques described above: use a differentiable model as a predictive critic
[Jordan Jacobs 90]. If we know the desired final evaluation, we can then proceed
as in equation (1) and obtain the gradient of the error w.r.t. the controller''s
weights. Schematically, this would look like Figure 1. [Figure 1: Jordan and Jacobs'' predictive forward modeling architecture. Solid lines indicate data paths, the dashed line indicates backpropagation.] When using a backprop network for the predictive model, we would backpropagate through it, through its control input, and then into the controller to modify
the controller network. We should note that since predictions make no sense without
a particular control policy, and the controller is only modified through the predictive
model, both networks must be trained simultaneously. [Jordan Jacobs 90] applied
this method to a well-known problem, that of learn ing to balance an inverted
pendulum on a movable cart by exerting appropriate horizontal forces on the cart.
The same task, without differentiating the critic, was studied in [Barto, Sutton Anderson
83]. There, reinforcement learning methods were used instead to modify the controller''s
weights; these perform a search which in some cases may be shown to follow, on
average, the gradient of the expected evaluation w.r .t. the network weights.
Since differentiating the critic allows this gradient to be found directly, one
would expect much faster learning when using the architecture of figure 1. However,
Jordan and Jacobs'' results show precisely the opposite: it is surprisingly slow.
2 THE REDUNDANCY PROBLEM We can explain the above surprising result if we consider
the fact that the predictive model network has redundant inputs: the control vector
c is a function of the state vector s (call this c = η(s)). Let κ and σ be the number of components of the control and state vectors, respectively. Instead of drawing its inputs from the entire volume of the (κ+σ)-dimensional input space, the predictor is trained only with inputs which lie on the σ-dimensional manifold defined by the relation η. Away from the manifold the network is free to produce
entirely arbitrary outputs. Differentiation of the model will then provide non-arbitrary
gradients only for directions tangential to the manifold; this is a condition
that the axes of the control dimensions will not, in general, satisfy.l This observation,
which concerns any model trained with redundant inputs, is the very simple yet
principal point of this paper. One may argue that since the control policy is
continually changing, the redundancy picture sketched out here is not in fact
accurate: as the controller is modified, many possible control policies are "seen" by the predictor, so creating volume in input space and leading to correct gradients obtained from the predictor. (Footnote 1: Note that if the mapping is single-valued, there is no way the manifold can "fold around" to cover all (or most) of the (κ+σ)-dimensional input space.) [Figure 2: The evaluation as a function of control action. Curves A, B, C, D represent possible (wrong) estimates of the "real" curve made by the predictive model network.] However, the way
in which this modification occurs is significant. An argument based on empirical
observations will be made to sustain this. Consider the example shown in figure
2. The graph shows what the "real" evaluation at some point in state space is,
as a function of a component of the control action taken at that point; this function is what the predictive network should approximate. Suppose the function implemented by the predictive network initially looks like the curve which crosses the "real" evaluation function at point (a); suppose also that the current action taken also
corresponds to point (a). Here we see a one-dimensional example of the redundancy
problem: though the prediction at this point is entirely accurate, the gradient
is not. If we wish to minimise the predicted evaluation, we would change the action
in the direction of point (b). Examples of point (a) will no longer be presented
to the predictive network, so it could quite plausibly modify itself simply so
as to look like the estimated evaluation curve "B" which is shown crossing point
(b) (a minimal change necessary to continue being correct). Again, the gradient
is wrong and minimising the prediction will change the action in the same direction
as before, perhaps to point (c); then to (d), and so on. Eventually, the prediction,
though accurate, will have zero gradient, as in curve "D", and no modifications
will occur. In practice, we have observed networks "getting stuck" in this fashion.
Though the objective was to minimise the evaluation, the system stops "learning"
at a point far from optimal. The problem may be solved, as Jordan and Jacobs did,
by introducing noise in the controller''s output, thus breaking the redundancy.
Unfortunately, this degrades signal quality and means that since we are predicting future evaluations, we wish to predict the effects of future noise - a notoriously difficult objective. The predictive network eventually outputs the evaluation''s expectation value, but this can take a long time. [Figure 3: The proposed system architecture. Again, solid lines represent data paths while the dashed line represents backpropagation (or differentiation).] 3 USING AN INTERMEDIATE MODEL 3.1 AN EXTRA WORLD MODEL
Another way to solve the redundancy problem is through the use of what is here
called an "intermediate model": a model of the world the controller is interacting
with. That is, if s(t) represents the state vector at time t, and c(t) the controller output at time t, it is a model of the function f, where s(t+1) = f(s(t), c(t)).
This model is used as represented schematically in figure 3. It helps in modularising
the learning task faced by the predictive model [Chrisley 90], but more interestingly,
it need not be trained simultaneously with the controller since its output does
not depend on future control policy. Hence, it can be trained separately, with
examples drawn from its entire (state x action) input space, providing gradient
signals without arbitrary components when differentiated. Once trained, we freeze
the intermediate model''s weights and insert it into the system as in figure 3;
we then proceed to train the controller and predictive model as before. The predictive
model will no longer have redundant inputs when trained either, so it too will
provide correct gradient signals. Since all arbitrary components have been eliminated,
the speedup expected from using differentiable predictive models should now be
obtainable.2 3.2 AN EXAMPLE TASK The intermediate model architecture was tested
on the same example task as used by Jordan and Jacobs, that of learning to balance
a pole which is attached through a hinge on its lower end to a movable cart. The
control action is a real-valued force applied to the cart; the evaluation signal is a "0" while the pole has not fallen over, and the cart hasn''t reached the edge of the finite-sized tracks it is allowed to move on, and a "1" when either of these events happens. A trial is then said to have failed, and terminates.3 (Footnote 2: This same architecture was independently proposed in [Werbos 90], but without the explanation as to why the intermediate model is necessary instead of merely desirable.) [Figure 4: The evolution of eight different learning networks, using the intermediate model.] We count the number of learning trials
needed before a controller is able to keep the pole balanced for a significant
amount of a time (measured in simulated seconds). Figure 4 shows the evolution
of eight networks; most reach balancing solutions within 100 to 300 failures.
(These successful networks came from a batch of eleven: the other three never
reached solutions.) This is 50 to 100 times faster than without the intermediate
model, where 5000 to 30000 trials were needed to achieve similar balancing times
[Jordan Jacobs 90]. We must now take into account the overhead needed to train
the intermediate model. This was done in 200 seconds of simulated time, while
training the whole system typically required some 400 seconds-the overhead is
small compared to the improvement achieved through the use of the intermediate
model. However, off-line training of the intermediate model requires an additional
agency to organise the selection and presentation of training examples. In the
real world, we would either need some device which could initialise the system
at any point in state space, or we would have to train through "flailing": applying
random control actions, over many trials, so as to eventually cover all possible
states and actions. As the dimensionality of the state representation rises for
larger problems, intermediate model training will become more difficult. 3The
differential equations which were used as a model of this system may be found
in [Barto, Sutton Anderson 83]. The parameters of the simulations were identical
to those used in [Jordan Jacobs 90]. Fast Learning with Predictive Forward Models
569 3.3 REMARKS We should note that the need for covering all state space is not
merely due to the requirement of training an intermediate model: dynamic-programming
based techniques such as the ones mentioned in this paper are guaranteed to lead
us to an optimal control solution only if we explore the entire state space during
learning. This is due to their generality, since no a priori structure of the
state space is assumed. It might be possible to interleave the training of the
intermediate model with the training of the controller and predictor networks,
so as to achieve both concurrently. High-dimensional problems will still be problematic,
but not just due to intermediate model training-the curse of dimensionality is
not easily avoided! 4 CONCLUSIONS If we differentiate through a model trained
with redundant inputs, we eliminate possible arbitrary components (which are due
to the arbitrary mixing of the inputs that the model may use) only if we differentiate
tangentially along the manifold defined by the relationship between the inputs.
For the architecture presented in [Jordan Jacobs 90], this is problematic, since
the axes of the control vector will typically not be tangential to the manifold.
Once we take this into account, it is clear why the architecture was not as efficient
as expected; and we can introduce an "intermediate" world model to avoid the problems
that it had. Using the intermediate model allows us to correctly obtain (through
backpropaga tion, or differentiation) a real-valued vector evaluation on the controller''s
output. On the example task presented here, this led to a 50 to 100-foid increase
in learn ing speed, and suggests a much better scaling-up performance and applicability
to real-world problems than simple reinforcement learning, where real-valued outputs
are not permitted, and vector control outputs would train very slowly. Acknowledgements
Many thanks are due to Richard Rohwer, who supervised the beginning of this project,
and to M. I. Jordan and R. Jacobs, who answered questions enlighteningly; thanks
are also due to Dr F. Bracho at IIMAS, UNAM, who provided the environment for the project''s conclusion. This work was supported by scholarships from CONACYT
in Mexico and from Caltech in the U.S. References [Ackley 88] D. H. Ackley, "Associative
Learning via Inhibitory Search", in D. S. Touretzky, ed., Advances in Neural Information
Processing Systems 1, Morgan Kaufmann 1989 [Barto, Sutton Anderson 83] A. G.
Barto, R. S. Sutton, and C. W. Anderson, "Neuronlike Adaptive Elements that can
Solve Difficult Control Problems", IEEE Transactions on Systems, Man, and Cybernetics,
Vol. SMC-13, No.5, [Barto, Sutton Watkins 89] A. G. Barto, R. S. Sutton, and
C. J. C. H. Watkins, "Learning and Sequential Decision Making", University of
Massachusetts at Amherst COINS Technical Report 89-95, September 1989 [Chrisley
90] R. L. Chrisley, "Cognitive Map Construction and Use: A Parallel Dis tributed
Approach", in Touretzky, Elman, Sejnowski, and Hinton, eds., Con nectionist Models:
Proceedings of the 1990 Summer School, Morgan Kaufmann [Jordan Jacobs 90] M.
I. Jordan and R. A. Jacobs, "Learning to Control an Unstable System with Forward
Modeling", in D. S. Touretzky, ed., Advances in Neural Information Processing
Systems 2, Morgan Kaufmann 1990 [Jordan Rumelhart 90] M. I. Jordan and D. E.
Rumelhart, "Supervised learning with a Distal Teacher" , preprint. [Nguyen Widrow
90] D. Nguyen and B. Widrow, ''''The Truck Backer-Upper: An Example of Self-Learning
in Neural Networks", in Miller, Sutton and Werbos, eds., Neural Networks for Control,
MIT Press 1990 [Sutton 88] R. S. Sutton, "Learning to Predict by the Methods of
Temporal Differ ences", Machine Learning 3: 9-44, 1988 [Werbos 90] P. Werbos,
"Architectures for Reinforcement Learning", in Miller, Sut ton and Werbos, eds.,
Neural Networks for Control, MIT Press 1990'
- 'Introduction Kernel machines have recently gained a lot of attention due to the
popularisation of the support vector machine (SVM) [13] with a focus on classification
and the revival of Gaussian Processes (GP) for regression [15]. Subsequently,
SVMs have been modified to handle regression [12] and GPs have been adapted to
the problem of classification [8]. Both schemes essentially work in the same function
space that is characterised by kernels (SVM) and covariance functions (GP), respectively.
While the formal similarity of the two methods is striking the underlying paradigms
of inference are very different. The SVM was inspired by results from statistical/PAC
learning theory while GPs are usually considered in a Bayesian framework. This
ideological clash can be viewed as a continuation in machine learning of the by
now classical disagreement between Bayesian and frequentistic statistics. With
regard to algorithmics the two schools of thought appear to favour two different
methods of learning and predicting: the SVM community - as a consequence of the
formulation of the SVM as a quadratic programming problem - focuses on learning
as optimisation while the Bayesian community favours sampling schemes based on
the Bayesian posterior. Of course there exists a strong relationship between the
two ideas, in particular with the Bayesian maximum a posteriori (MAP) estimator
being the solution of an optimisation problem. Interestingly, the two viewpoints
have recently been reconciled theoretically in the so-called PAC-Bayesian framework
[5] that combines the idea of a Bayesian prior with PAC-style performance guarantees
and has been the basis of the so far tightest margin bound for SVMs [3]. In practice,
optimisation based algorithms have the advantage of a unique, deterministic solution
and the availability of the cost function as an indicator for the quality of the
solution. In contrast, Bayesian algorithms based on sampling and voting are more
flexible and have the so-called "anytime" property, providing a relatively good
solution at any point in time. Often, however, they suffer from the computational
costs of sampling the Bayesian posterior. In this contribution we review the idea
of the Bayes point machine (BPM) as an approximation to Bayesian inference for
linear classifiers in kernel space in Section 2. In contrast to the GP viewpoint
we do not define a Gaussian prior on the length ||w||_K of the weight vector. Instead, we only consider weight vectors of length ||w||_K = 1 because it is only
the spatial direction of the weight vector that matters for classification. It
is then natural to define a uniform prior on the resulting ball shaped hypothesis
space. Hence, we determine the centre of mass ("Bayes point") of the resulting
posterior that is uniform in version space, i.e. in the zero training error region.
While the version space could be sampled using some form of Gibbs sampling (see,
e.g. [6] for an overview) or an ergodic dynamic system such as a billiard [4]
we suggest to use the perceptron algorithm trained on permutations of the training
set for sampling in Section 3. This extremely simple sampling scheme proves to
be efficient enough to make the BPM applicable to large data sets. We demonstrate
this fact in Section 4 on the well-known MNIST data set containing 60 000 samples
of handwritten digits and show how an approximation to the posterior probability
of classification provided by the BPM can even be used for test-point rejection
leading to a great reduction in generalisation error on the remaining samples.
We denote n-tuples by italic bold letters (e.g. x = (x_1, ..., x_n)), vectors by roman bold letters (e.g. x), random variables by sans serif font (e.g. X) and vector spaces by calligraphic capitalised letters (e.g. X). The symbols P, E and I denote a probability measure, the expectation of a random variable and the
indicator function, respectively. 2 Bayes Point Machines Let us consider the task
of classifying patterns x ∈ X into one of the two classes y ∈ Y = {-1, 1} using functions h : X → Y from a given set H known as the hypothesis space. In this paper we shall only be concerned with linear classifiers: h_w(x) = sign(⟨x, w⟩_K), (1) where φ : X → K is known as the feature map and has to be fixed beforehand. If all that is needed for learning and classification are the inner products ⟨·, ·⟩_K in the feature space K, it is convenient to specify φ only by its inner product function k : X × X → R, known as the kernel, i.e. k(x, x̃) = ⟨φ(x), φ(x̃)⟩_K. (2) (Footnote 1: For notational convenience we shall abbreviate φ(x) by x. This should not be confused with the set x of training points.) For
simplicity, let us assume that there exists a classifier w* ∈ W that labels all the training points correctly. (This assumption can easily be relaxed by introducing slack variables as done in the soft margin variant of the SVM.) Then, given a training set z = (x, y) of m points x_i together with their classes y_i assigned by h_{w*}, drawn iid from an unknown data distribution P_z = P_{Y|X} P_X, we can assume the existence of a version space V(z), i.e. the set of all classifiers w ∈ W consistent with z: V(z) = {w ∈ W | ∀i: y_i ⟨x_i, w⟩_K > 0}. (3) In a Bayesian spirit
we incorporate all of our prior knowledge about w into a prior distribution Pw
over W. In the absence of any a priori knowledge we suggest a uniform prior over
the spatial direction of weight vectors w. Now, given the training set z we update
our prior belief by Bayes'' formula, i.e. P_{W|Z^m=z}(w) = P_{Y^m|X^m=x, W=w}(y) P_W(w) / P_{Y^m|X^m=x}(y), which equals P_W(w) / P_W(V(z)) if w ∈ V(z) and 0 otherwise, where the first line follows from the independence and the fact that x has no dependence on w and the second line follows from (2) and (3). The Bayesian classification of a novel test point x is then given by Bayes_z(x) = argmax_{y ∈ Y} P_{W|Z^m=z}({h_w(x) = y}) = sign(E_{W|Z^m=z}[h_w(x)]). Unfortunately, the strategy Bayes_z is in general not contained
in the set H of classifiers considered beforehand. Since P_{W|Z^m=z} is only non-zero inside version space, it has been suggested to use the centre of mass w_cm as an approximation for Bayes_z, i.e. w_cm = E_{W|Z^m=z}[W]. (4) This classifier is called the Bayes point. In a previous work [4] we calculated w_cm using a first order Markov chain based on a billiard-like algorithm (see also [10]). We entered the version space V(z) using a perceptron algorithm and started playing billiards in version space V(z), thus creating a sequence of pseudo-random samples w_i due to the chaotic nature of the billiard dynamics. Playing billiards in V(z) is possible because each training point (x_i, y_i) ∈ z defines a hyperplane {w ∈ W | y_i ⟨x_i, w⟩_K = 0} in W. Hence, the version space is a convex polyhedron on the surface of W. After N bounces of the billiard ball the Bayes point was estimated by the average of the collected samples w_i. (We synonymously call h ∈ H and w ∈ W a classifier because there is a one-to-one correspondence between the two by virtue of (1).) Although this algorithm shows excellent generalisation
performance when compared to state-of-the art learning algorithms like support
vector machines (SVM) [13], its effort scales like O(m²) and O(N · m²) in terms of memory and computational requirements, respectively. 3 Sampling the Version Space Clearly, all we need for estimating the Bayes point (4) is a set of classifiers W drawn uniformly from V(z). In order to save computational resources it might be advantageous to achieve a uniform sample only approximately. The classical perceptron learning algorithm offers the possibility to obtain up to m! different classifiers in version space simply by learning on different permutations of the training set. Given a permutation Π : {1, ..., m} → {1, ..., m}, the perceptron algorithm works as follows: start with w_0 = 0 and t = 0, cycle through the permuted training points, and update w_{t+1} = w_t + y_{Π(i)} x_{Π(i)} whenever the point (x_{Π(i)}, y_{Π(i)}) is misclassified, until all points are correctly classified. A classical theorem
due to Novikoff [7] guarantees the convergence of this procedure and furthermore
provides an upper bound on the number t of mistakes needed until convergence.
More precisely, if there exists a classifier w_SVM with margin γ_z(w_SVM) > 0, then the number of mistakes until convergence - which is an upper bound on the sparsity of the solution - is not more than R²(x) γ_z^{-2}(w_SVM), where R(x) is the smallest real number such that ∀x ∈ x: ||φ(x)||_K ≤ R(x). The quantity γ_z(w_SVM) is maximised for the solution w_SVM found by the SVM, and whenever the SVM is theoretically justified by results from learning theory (see [11, 13]) the ratio d = R²(x) γ_z^{-2}(w_SVM) is considerably less than m, say d ≪ m. Algorithmically, we can benefit
from this sparsity by the following "trick": since the current solution can be written as w_t = Σ_i α_i x_i, all we need to store is the m-dimensional vector α. Furthermore, we keep track of the m-dimensional vector o of real-valued outputs o_i = ⟨x_i, w_t⟩_K of the current solution at the i-th training point. By definition, in the beginning α = 0 and o = 0. Now, if y_i o_i ≤ 0 we update α_i by α_i + y_i and update o by o_j ← o_j + y_i k(x_i, x_j), which requires only m kernel calculations. In summary, the memory requirement of this algorithm is 2m and the number of kernel calculations is not more than dm. As a consequence, the computational requirement of this algorithm is no more than the computational requirement for the evaluation of the margin γ_z(w_SVM)! We suggest to use this efficient perceptron learning algorithm in order to obtain samples w_i for the computation of the Bayes point
by (4). [Figure 1: (a) Histogram of generalisation errors (estimated on a test set) using a kernel Gibbs sampler. (b) Histogram of generalisation errors (estimated on a test set) using a kernel perceptron. (c) QQ plot of distributions (a) and (b). The straight line indicates that both distributions are very similar.]
In order to investigate the usefulness of this approach experimentally, we compared
the distribution of generalisation errors of samples obtained by perceptron learning
on permuted training sets (as suggested earlier by [14]) with samples obtained
by a full Gibbs sampling [2]. For computational reasons, we used only 188 training
patterns and 453 test patterns of the classes "1" and "2" from the MNIST data set. In Figure 1 (a) and (b) we plotted the distribution over 1000 random samples. Using a quantile-quantile (QQ) plot technique we can compare both distributions
in one graph (see Figure 1 (c)). These plots suggest that by simple permutation
of the training set we are able to obtain a sample of classifiers exhibiting the
same generalisation error distribution as with time-consuming Gibbs sampling.
4 Experimental Results In our large scale experiment we used the full MNIST data
set with 60000 training examples and 10000 test examples of 28 x 28 grey value
images of handwritten digits. As input vector x we used the 784 dimensional vector
of grey values. The images were labelled by one of the ten classes "0" to "9". For each of the ten classes y ∈ {0, ..., 9} we ran the perceptron algorithm N = 10
times each time labelling all training points of class y by 1 and the remaining
training points by -1. On an Ultra Sparc 10 each learning trial took approximately
20-30 minutes. For the classification of a test image x we calculated the real-valued output of all 100 different classifiers by f_i(x) = Σ_j (α_i)_j k(x_j, x), where we used the kernel k given by (5); (α_i)_j refers to the expansion coefficient corresponding to the i-th classifier and the j-th data point. (Footnote 3: Available at http://www.research.att.com/~yann/ocr/mnist/. Footnote 4: We decided to use this kernel because it showed excellent generalisation performance when using the support vector machine. Footnote 5: For notational simplicity we assume that the first N classifiers are classifiers for the class "0", the next N for class "1" and so on.) Now, for each of the ten classes we calculated the real-valued decision of the Bayes point w_y by averaging the N real-valued outputs belonging to class y. In a Bayesian spirit, the final decision was carried out by choosing the class y with the largest value f_bp,y(x). Note that f_bp,y(x) [9] can be interpreted as an (unnormalised) approximation of the posterior probability that x is of class y when restricted to the function class (1). In order to test the dependence of the generalisation error on the magnitude max_y f_bp,y(x) we fixed a certain rejection rate r ∈ [0, 1] and rejected the set of r · 10000 test points with the smallest value of max_y f_bp,y(x). The resulting plot is depicted in Figure 2. [Figure 2: Generalisation error as a function of the rejection rate for the MNIST data set. The SVM achieved 1.4% without rejection as compared to 1.46% for the BPM. Note that by rejection based on the real-valued output the generalisation error could be reduced to 0.1%, indicating that this measure is related to the probability of misclassification of single test points.] As can be seen from this plot,
even without rejection the Bayes point has excellent generalisation performance (see footnote 6). Furthermore, rejection based on the real-valued output f_bp(x) turns out to be excellent, thus reducing the generalisation error to 0.1%. One should also bear
in mind that the learning time for this simple algorithm was comparable to that
of SVMs. A very advantageous feature of our approach as compared to SVMs are its
adjustable time and memory requirements and the "anytime" availability of a solution
due to sampling. If the training set grows further and we are not able to spend
more time with learning, we can adjust the number N of samples used at the price
of slightly worse generalisation error. 5 Conclusion In this paper we have presented
an algorithm for approximating the Bayes point by rerunning the classical perceptron
algorithm with a permuted training set. Here we particularly exploited the sparseness of the solution which must exist whenever the success of the SVM is theoretically justified. (Footnote 6: Note that the best known result on this data set is 1.1%, achieved with a polynomial kernel of degree four. Nonetheless, for reasons of fairness we compared the results of both algorithms using the same kernel.) The restriction to
the zero training error case can be overcome by modifying the kernel as k(x, x̃) ← k(x, x̃) + λ · I_{x = x̃}. This technique
is well known and was already suggested by Vapnik in 1995 (see [1]). Another interesting
question raised by our experimental findings is the following: By how much is
the distribution of generalisation errors over random samples from version space
related to the distribution of generalisation errors of the up to m! different
classifiers found by the classical perceptron algorithm? Acknowledgements We would
like to thank Bob Williamson for helpful dis cussions and suggestions on earlier
drafts. Parts of this work were done during a research stay of both authors at
the ANU Canberra. References [1] C. Cortes and V. Vapnik. Support Vector Networks. Machine Learning, 20:273-297. [2] T. Graepel and R. Herbrich. The kernel Gibbs sampler. In Advances in Neural Information Processing Systems 13, 2001. [3] R. Herbrich and T. Graepel. A PAC-Bayesian margin bound for linear classifiers: Why SVMs work. In Advances in Neural Information Processing Systems 13, 2001. [4] R. Herbrich, T. Graepel, and C. Campbell. Robust Bayes Point Machines. In Proc. [5] D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 230-234, Madison, Wisconsin. [6] R. M. Neal. Markov chain Monte Carlo method based on ''slicing'' the density function. Technical report, Department of Statistics, University of Toronto, 1997. TR-9722. [7] A. Novikoff. On convergence proofs for perceptrons. In Report at the Symposium on Mathematical Theory of Automata, pages 24-26, Polytechnic Institute Brooklyn. [8] M. Opper and O. Winther. Gaussian processes for classification: Mean field algorithms. Neural Computation, 12(11), 2000. [9] J. Platt. Probabilities for SV machines. In Advances in Large Margin Classifiers. [10] P. Rujan and M. Marchand. Computing the Bayes kernel classifier. In Advances in Large Margin Classifiers, pages 329-348. MIT Press, 2000. [11] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory. [12] A. J. Smola. Learning with Kernels. PhD thesis, Technische Universitat Berlin, 1998. [13] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995. [14] T. Watkin. Optimal learning with a neural network. Europhysics Letters, 21:871-877. [15] C. Williams. Prediction with Gaussian Processes: From linear regression to linear prediction and beyond. Technical report, Neural Computing Research Group, Aston'
- source_sentence: Mathematical analysis of coarse-coded symbol memories in neural
networks
sentences:
- 'Introduction Measuring ways by which several neurons in the brain participate
in a specific computational task can shed light on fundamental neural information
processing mechanisms . While it is unlikely that complete information from any
macroscopic neural tissue will ever be available, some interesting insight can
be obtained from simultaneously recorded cells in the cortex of behaving animals.
The question we address in this study is the level of synergy, or the level of
cooperation, among brain cells, as determined by the information they provide
about the observed behavior of the animal. 1.1 The experimental data We analyze
simultaneously recorded units from behaving monkeys during a delayed response
behavioral experiment. The data was collected at the high brain function laboratory
of the Hadassah Medical School of the Hebrew University [1, 2]. In this task the
monkey had to remember the location of a visual stimulus and respond by touching
that location after a delay of 1-32 sec. Correct responses were rewarded by a
drop of juice. In one set of recordings six micro-electrodes were inserted simultaneously
to the frontal or prefrontal cortex[l, 3]. In another set of experiments the same
behavioral paradigm was used and recordings were taken from the striatum - which
is the first station in basal ganglia (a sub-cortical ganglia)[2]. The cells recorded
in the striatum were the tonically active neurons[2], which are known to be the
cholinergic inter-neurons of the striatum. These cells are known to respond to
reward. The monkeys were trained to perform the task in two alternating modes, "Go" and "No-Go" [1]. Both sets of behavioral modes can be detected from the
recorded spike trains using several statistical modeling techniques that include
Hidden Markov Models (HMM) and Post Stimulus Histograms (PSTH). The details of
these detection methods are reported elsewhere [4, 5]. For this paper it is important to know that we can significantly detect the correct behavior; for example, in the "Go" vs. the "No-Go" case correct detection is achieved about 90% of the time, where the random level is 50% and the monkey''s average performance is 95% correct on this task.
2 Theoretical background Our measure of synergy level among cells is information
theoretic and was recently proposed by Brenner et al. [6] for analysis of spikes
generated by a single neuron. This is the first application of this measure to
quantify cooperativity among neurons. 2.1 Synergy and redundancy A fundamental
quantity in information theory is the mutual information between two random variables
X and Y. It is defined as the cross-entropy (Kullback-Leibler divergence) between the joint distribution of the variables, p(x, y), and the product of the marginal distributions p(x)p(y). As such it measures the statistical dependence of the variables X and Y. It is symmetric in X and Y and has the following familiar relations to their entropies [7]: I(X; Y) = H(X) - H(X|Y) = H(Y) - H(Y|X). When given three random variables X_1, X_2 and Y, one can consider the mutual information between the joint variables (X_1, X_2) and the variable Y, I(X_1, X_2; Y) (notice the position of the semicolon), as well as the mutual informations I(X_1; Y) and I(X_2; Y). Similarly, one can consider the mutual information between X_1 and X_2 conditioned on a given value of Y = y, I(X_1; X_2|y) = D_KL[p(X_1, X_2|y) || p(X_1|y) p(X_2|y)], as well as its average, the conditional mutual information, I(X_1; X_2|Y) = Σ_y p(y) I(X_1; X_2|y). Following Brenner et al. [6] we define the synergy level of X_1 and X_2 with respect to the variable Y as Syn_Y(X_1, X_2) = I(X_1, X_2; Y) - [I(X_1; Y) + I(X_2; Y)], with the natural generalization to more than two variables X. This expression can be rewritten in terms of entropies and conditional information as follows: Syn_Y(X_1, X_2) = I(X_1; X_2|Y) - I(X_1; X_2), (3) where the first term depends on Y and the second is independent of Y. When the variables exhibit positive
synergy value, with respect to the variable Y, they jointly provide more information
on Y than when considered independently, as expected in synergetic cases. Negative
synergy values correspond to redundancy - the variables do not provide independent
information about Y. Zero synergy value is obtained when the variables are independent
of Y or when there is no change in their dependence when conditioned on Y. We
claim that this is a useful measure of cooperativity among neurons, in a given
computational task. It is clear from Eq.( 3) that if since in that case L yP(y)Iy(XI;X2) I(XI;X2).
In other words, the synergy value is not zero only if the statistical dependence,
hence the mutual information between the variables, is affected by the value of
Y . It is positive when the mutual information increase, on the average, when
conditioned on Y, and negative if this conditional mutual information decrease.
Notice that the value of synergy can be both positive and negative since information,
unlike entropy, is not sub-additive in the X variables. 114 1. Gat and N Tishby
3 Synergy among neurons Our measure of synergy among the units is based on the
ability to detect the behavioral mode from the recorded activity, as we discuss
below. As discussed above, synergy among neurons is possible only if their statistical
dependence change with time. An important case where synergy is not expected is
pure "population coding" [8]. In this case the cells are expected to fire independently,
each with its own fixed tuning curve. Our synergy value can thus be used to test
if the recorded units are indeed participating in a pure population code of this
kind, as hypothesized for certain motor cortical activity. Theoretical models
of the cortex that clearly predict nonzero synergy include attractor neural networks (ANN) [9] and synfire chain models (SFC) [3]. Both these models predict changes in
the collective activity patterns, as neurons move between attractors in the ANN
case, or when different synfire-chains of activity are born or disappear in the
SFC case. To the extent that such changes in the collective activity depend on
behavior, nonzero synergy values can be detected. It remains an interesting theoretical
challenge to estimate the quantitative synergy values for such models and compare
it to observed quantities. 3.1 Time-dependent cross correlations In our previous
studies[4] we demonstrated, using hidden Markov models of the activity, that the
pairwise cross-correlations in the same data can change significantly with time,
depending on the underlying collective state of activity. These states, revealed
by the hidden Markov model, in turn depend on the behavior and enable its prediction.
Dramatic and fast changes in the cross-correlation of cells have also been shown by others [10]. This finding indicates directly that the statistical dependence
of the neurons can change (rapidly) with time, in a way correlated to behavior.
This clearly suggests that nonzero synergy should be observed among these cortical
units, relative to this behavior. In the present study this theoretical hypothesis
is verified. 3.2 Redundancy cases If, on the other hand, the conditional mutual information equals zero for all behavioral modes, i.e. I_y(X1; X2) = 0 for all y in Y, while I(X1; X2) > 0, we expect to get negative synergy, or redundancy among the cells,
with respect to the behavior variable Y. We observed clear redundancy in another
part of the brain, the basal ganglia, dur ing the same experiment, when the behavior
was the pre-reward and post-reward activity. In this case different cells provide
exactly the same information, which yields negative synergy values. 4 Experimental
results 4.1 Synergy measurement in practice To evaluate the synergy value among
different cells, it is necessary to estimate the conditional distribution p(y|x), where y is the current behavior and x represents a single trial of spike trains of the considered cells. Estimating this probability, however, requires an underlying statistical model, or a representation of the spike trains. Otherwise there is never enough data, since cortical spike trains are never exactly reproducible. In this work we choose the rate representation, which is the simplest to evaluate. The estimation of p(y|x) goes as follows: For each of the M behavioral modes (y1, y2, ..., yM) collect spike train samples (the training data set). Using the training sample, construct a Post Stimulus Time Histogram (PSTH), i.e. the rate as a function of time, for each behavioral mode. Given a spike train outside of the training set, compute the probability that it resulted from each of the M modes. The spike train is considered correctly classified if the most probable mode is in fact the true behavioral mode, and incorrectly otherwise. The fraction of correct classifications, over all spike trains of a given behavioral mode yi, is taken as the estimate of p(yi|x), and denoted P_Ci, where Ci is the identity of the cells used in the computation. For the case of only two categories of behavior and a uniform distribution over the categories, the value of the entropy H(Y) is the same for all combinations of cells, and is simply H(Y) = -Σ_y p(y) log2 p(y) = log2 2 = 1. The full expression (in bits) for the synergy value can thus be written as follows: if the first expression is larger than the second then there is (positive) synergy, and vice versa for redundancy. However, there is one very important caveat. As
we saw the computation of the mutual information is not done exactly, and what
one really computes is only a lower bound. If the bound is tighter for the multiple-cell calculation, the method could falsely infer positive synergy, and if the bound is tighter for the single-cell computation, the method could falsely infer negative synergy. In previous works we have shown that the method we use for this estimation is quite reasonable and robust [5]; therefore, we believe that we have
even a conservative (i.e. less positive) estimate of synergy. 4.2 Observed synergy
values In the first set of experiments we tried to detect the behavioral mode
during the delay-period of correct trials. In this case the two types of behavior
were the "Go" and the "No-Go" described in the introduction. An example of this
detection problem is given in figure 1A. In this figure there are 100 examples
of multi-electrode recording of spike trains during the delay period. On the left
is the "Go-mode" data and on the right the "No-Go mode", for two cells. On the
lower part there is an example of two single spike trains that need to be classified
by the mode models. Figure 1: Raster displays of simultaneously
recorded cells in the 2 different areas, in each area there were 2 behavioral
modes. Table 1 gives some examples of detection results obtained by using 2 cells
indepen dently, and by using their joint combination. It can be seen that the
synergy is positive and significant. We examined 19 recording session of the same
behavioral modes for two different animals and evaluated the synergy value. In
18 out of the 19 sessions there was at least one example of significant positive
synergy among the cells. For comparison we analyzed another set of experiments
in which the data was recorded from the striatum in the basal ganglia. An example
for this detection is shown in figure lB. The behavioral modes were the "pre-reward"
vs. the "post reward" periods. Nine recording sessions for the two different monkeys
were examined using the same detection technique. Although the detection results improve when the number of cells increases, in none of these recordings was a positive synergy value found. For most of the data the synergy value was close to zero, i.e. the mutual information among two cells jointly was close to the sum of the mutual information of the independent cells, as expected when the cells exhibit (conditionally) independent activity. The prevailing difference between the synergy measurements in the cortex and in the TANs of the basal ganglia is also strengthened by the different mechanisms underlying those cells. The TANs are assumed to be global mediators of information in the striatum, a relatively simple task,
whereas the information processed in the frontal cortex in this task is believed
to be much more collective and complicated. Here we suggest a first handle for
quantitative detection of such different neuronal activities. Acknowledgments
Special thanks are due to Moshe Abeles for his encouragement and support, and
to William Bialek for suggesting the idea to look for the synergy among cortical
cells. We would also like to thank A. Raz, Hagai Bergman, and Eilon Vaadia for
sharing their data with us. The research at the Hebrew University was supported in part by a grant from the United States Israel Binational Science Foundation (BSF). Table
1: Examples of synergy among cortical neurons. For each example the mutual information
of each cell separately is given together with the mutual information of the pair.
In parenthesis the matching detection probability (average over p(ylx)) is also
given. The last column gives the percentage of increase from the mutual information
of the single cells to the mutual information of the pair. The table gives only
those pairs for which the percentage was larger than 20% and the detection rate higher than 60%. The table columns are: Session, Cells, Cell 1, Cell 2, Both cells, Syn (%). References [1] M. Abeles, E. Vaadia, H. Bergman, Firing patterns of single units in the prefrontal cortex and neural-network models, Network 1 (1990). [2] A. Raz et al., Neuronal synchronization of tonically active neurons in the striatum of normal and parkinsonian primates, J. Neurophysiol. 76:2083-2088. [3] M. Abeles, Corticonics (Cambridge University Press, 1991). [4] I. Gat, N. Tishby and M. Abeles, Hidden Markov modeling of simultaneously recorded cells in the associative cortex of behaving monkeys, Network, 8:297-322. [5] I. Gat, N. Tishby, Comparative study of different supervised detection methods of simultaneously recorded spike trains, in preparation. [6] N. Brenner, S.P. Strong, R. Koberle, W. Bialek, and R. de Ruyter van Steveninck, The Economy of Impulses and the Stiffness of Spike Trains, NEC Research Institute Technical Note (1998). [7] T.M. Cover and J.A. Thomas, Elements of Information Theory (Wiley, NY). [8] A.P. Georgopoulos, A.B. Schwartz, R.E. Kettner, Neuronal Population Coding. [9] D.J. Amit, Modeling Brain Function (Cambridge University Press, 1989). [10] E. Ahissar et al., Dependence of Cortical Plasticity on Correlated Activity of Single Neurons and on Behavioral Context, Science, 257:1412-1415 (1992).'
- 'Introduction A distributed representation is a memory scheme in which each entity (concept, symbol) is represented by a pattern of activity over many units [3]. If each unit participates in the representation of many entities, it is said to be coarsely tuned, and the memory itself is called a coarse-coded memory. Coarse-coded
memories have been used for storing symbols in several neural network symbol processing
models, such as Touretzky and Hinton''s distributed connectionist production system
DCPS [8,9], Touretzky''s distributed implementation of linked list structures
on a Boltzmann machine, BoltzCONS [10], and St. John and McClelland''s PDP model
of case role defaults [6]. In all of these models, memory capacity was measured
empirically and parameters were adjusted by trial and error to obtain the desired
behavior. We are now able to give a mathematical foundation to these experiments
by analyzing the relationships among the fundamental memory parameters. There
are several paradigms for coarse-coded memories. In a feature-based representation,
each unit stands for some semantic feature. Binary units can code features with
binary values, whereas more complicated units or groups of units are required
to code more complicated features, such as multi-valued properties or numerical
values from a continuous scale. The units that form the representation of a concept
define an intersection of features that constitutes that concept. Similarity between
concepts composed of binary features can be measured by the Hamming distance between
their representations. In a neural network implementation, relationships between
concepts are implemented via connections among the units forming their representations.
Certain types of generalization phenomena thereby emerge automatically. A different
paradigm is used when representing points in a multidimensional contin uous space
[2,3]. Each unit encodes values in some subset of the space. Typically the subsets are hypercubes or hyperspheres, but they
may be more coarsely tuned along some dimensions than others [1]. The point to
be represented is in the subspace formed by the intersection of all active units.
AB more units are turned on, the accuracy of the representation improves. The
density and degree of overlap of the units'' receptive fields determines the system''s
resolution [7]. Yet another paradigm for coarse-coded memories, and the one we
will deal with exclusively, does not involve features. Each concept, or symbol,
is represented by an arbitrary subset of the units, called its pattern. Unlike
in feature-based representations, the units in the pattern bear no relationship
to the meaning of the symbol represented. A symbol is stored in memory by turning
on all the units in its pattern. A symbol is deemed present if all the units in
its pattern are active. The receptive field of each unit is defined as the set
of all symbols in whose pattern it participates. We call such memories coarse
coded symbol memories (CCSMs). We use the term "symbol" instead of "concept" to
emphasize that the internal structure of the entity to be represented is not involved
in its representation. In CCSMs, a short Hamming distance between two symbols
does not imply semantic similarity, and is in general an undesirable phenomenon.
The efficiency with which CCSMs handle sparse memories is the major reason they
have been used in many connectionist systems, and hence the major reason for studying
them here. The unit-sharing strategy that gives rise to efficient encoding in
CCSMs is also the source of their major weakness. Symbols share units with other
symbols. As more symbols are stored, more and more of the units are turned on.
At some point, some symbol may be deemed present in memory because all of its
units are turned on, even though it was not explicitly stored: a "ghost" is born.
Ghosts are an unwanted phenomenon arising out of the overlap among the representations
of the various symbols. The emergence of ghosts marks the limits of the system''s
capacity: the number of symbols it can store simultaneously and reliably. 2 Definitions
and Fundamental Parameters A coarse coded symbol memory in its most general form
consists of: A set of N binary state units. An alphabet of Q symbols to be represented.
Symbols in this context are atomic entities: they have no constituent structure.
A memory scheme, which is a function that maps each symbol to a subset of the
units - its pattern. The receptive field of a unit is defined as the set of all
symbols to whose pattern it belongs (see Figure 1). The exact nature of the lThis
criterion can be generalized by introducing a visibility threshold: a fraction
of the pattern that should be on in order for a symbol to be considered present.
Our analy sis deals only with a visibility criterion of 100, but can be generalized
to accommodate Figure 1: A memory scheme (N 6, Q 8) defined in terms of units
Us and symbols 8;. The columns are the symbols'' patterns. The rows are the units''
receptive fieldB. memory scheme mapping determines the properties of the memory,
and is the central target of our investigation. As symbols are stored, the memory
fills up and ghosts eventually appear. It is not possible to detect a ghost simply
by inspecting the contents of memory, since there is no general way of distinguishing
a symbol that was stored from one that emerged out of overlaps with other symbols.
(It is sometimes possible, however, to conclude that there are no ghosts.) Furthermore,
a symbol that emerged as a ghost at one time may not be a ghost at a later time
if it was subsequently stored into memory. Thus the definition of a ghost depends
not only on the state of the memory but also on its history. Some memory schemes
guarantee that no ghost will emerge as long as the number of symbols stored does
not exceed some specified limit. In other schemes, the emergence of ghosts is
an ever-present possibility, but its probability can be kept arbitrarily low by
adjusting other parameters. We analyze systems of both types. First, two more
bits of notation need to be introduced: Pghost: Probability of a ghost. The probability
that at least one ghost will appear after some number of symbols have been stored.
k: Capacity. The maximum number of symbols that can be stored simultaneously before
the probability of a ghost exceeds a specified threshold. If the threshold is
0, we say that the capacity is guaranteed. A localist representation, where every
symbol is represented by a single unit and every unit is dedicated to the representation
of a single symbol, can now be viewed as a special case of coarse-coded memory,
where k = N = Q and Pghost = 0. Localist representations are well suited for memories
that are not sparse. In these cases, coarse coded memories are at a disadvantage.
In designing coarse-coded symbol memories we are interested in cases where k «
N « Q. The permissible probability for a ghost in these systems should be low
enough so that its impact can be ignored. 3 Analysis of Four Memory Schemes 3.1 Bounded Overlap (guaranteed capacity) If we want to construct the memory scheme with the largest possible Q (given N and k) while guaranteeing Pghost = 0, the problem
can be stated formally as: Given a set of size N, find the largest collection
of subsets of it such that no union of k such subsets subsumes any other subset
in the collection. This is a well known problem in Coding Theory, in slight disguise.
Unfortunately, no complete analytical solution is known. We therefore simplify
our task and consider only systems in which all symbols are represented by the
same number of units (i.e. all patterns are of the same size). In mathematical
terms, we restrict ourselves to constant weight codes. The problem then becomes:
Given a set of size N, find the largest collection of subsets of size exactly
L such that no union of k such subsets subsumes any other subset in the collection.
There are no known complete analytical solutions for the size of the largest collection
of patterns even when the patterns are of a fixed size. Nor is any efficient procedure
for constructing such a collection known. We therefore simplify the problem further.
We now restrict our consideration to patterns whose pairwise overlap is bounded
by a given number. For a given pattern size L and desired capacity k, we require
that no two patterns overlap in more than m units, where: Memory schemes that
obey this constraint are guaranteed a capacity of at least k symbols, since any
k symbols taken together can overlap at most L - 1 units in the pattern of any
other symbol - one unit short of making it a ghost. Based on this constraint,
our mathematical problem now becomes: Given a set of size N, find the largest
collection of subsets of size exactly L such that the intersection of any two
such subsets is of size at most m (where m is given by Equation 1). Coding theory has
yet to produce a complete solution to this problem, but several methods of deriving
upper bounds have been proposed (see for example [4]). The simple formula we use
here is a variant of the Johnson Bound. Let Q_bo denote the maximum number of symbols attainable in memory schemes that use bounded overlap; the Johnson bound gives an upper limit on Q_bo in terms of N, L and m (Equation 2). The Johnson bound is known to be an exact solution asymptotically (that is, when N, L, m go to infinity and their ratios remain finite). Since we are free to choose the pattern size, we optimize our memory scheme by maximizing the bound over all possible values of L. For the parameter subspace we are interested in here (N ≤ 1000, k ≤ 50) we use numerical approximation to obtain Q_bo(N, k) ≈ e^(0.367 N/k) (Equation 3). (Recall that m is a function of L and k.) Thus the upper bound we derived depicts a simple exponential relationship between Q and N/k. Next, we try to construct memory schemes of this type. A Common
Lisp program using a modified depth-first search constructed memory schemes for
various parameter values, whose Q''s came within 80% to 90% of the upper bound.
These results are far from conclusive, however, since only a small portion of
the parameter space was tested. In evaluating the viability of this approach,
its apparent optimality should be contrasted with two major weaknesses. First, this type of memory scheme is hard to construct computationally. It took our program several minutes of CPU time on a Symbolics 3600 to produce reasonable solutions for cases like N = 200, k = 5, m = 1, with an exponential increase in computing time for larger values of m. Second, if CCSMs are used as models of memory in naturally
evolving systems (such as the brain), this approach places too great a burden
on developmental mechanisms. The importance of the bounded overlap approach lies
mainly in its role as an upper bound for all possible memory schemes, subject
to the simplifications made earlier. All schemes with guaranteed capacities can
be measured relative to equation 3. 3.2 Random Fixed Size Patterns (a stochastic
approach) Randomly produced memory schemes are easy to implement and are attractive
because of their naturalness. However, if the patterns of two symbols coincide,
the guaranteed capacity will be zero (storing one of these symbols will render
the other a ghost). We therefore abandon the goal of guaranteeing a certain capacity,
and instead establish a tolerance level for ghosts, Pghost. For large enough memories,
where stochastic behavior is more robust, we may expect reasonable capacity even
with very small Pghost. In the first stochastic approach we analyze, patterns
are randomly selected subsets of a fixed size L. Unlike in the previous approach,
choosing k does not bound Q. We may define as many symbols as we wish, although
at the cost of increased probability of a ghost (or, alternatively, decreased
capacity). The probability of a ghost appearing after k symbols have been stored
is given by Equation 4, in which T_N,L(k, c) is the probability that exactly c units will be active after k symbols have been stored; T_N,L is defined recursively by Equation 5. We have constructed various coarse-coded memories with random fixed-size receptive
fields and measured their capacities. The experimental results show good agreement
with the above equation. The optimal pattern size for fixed values of N, k, and
Q can be determined by binary search on Equation 4, since Pghost(L) has exactly
one maximum in the interval [1, N]. However, this may be expensive for large N.
A computational shortcut can be achieved by estimating the optimal L and searching
in a small interval around it. A good initial estimate is derived by replacing
the summation in Equation 4 with a single term involving E[c], the expected value of the number of active units after k symbols have been stored. The latter can be expressed as E[c] = N(1 - (1 - L/N)^k). The estimated L is the one that maximizes the resulting single-term estimate. An alternative formula, developed by Joseph Tebelskis, produces very good approximations to Eq. 4 and is much more efficient to compute. After storing k symbols
in memory, the probability P_x that a single arbitrary symbol x has become a ghost is given by a closed-form expression. If we now assume that each symbol''s P_x is independent of that of any other symbol, we obtain Pghost ≈ 1 - (1 - P_x)^(Q - k). This assumption of independence is not strictly true, but the relative error was less than 0.1 for the parameter ranges we considered when Pghost was small. We have constructed the two-dimensional table T_N,L(k, c) for a wide range of (N, L) values (70 ≤ N ≤ 1000, 7 ≤ L ≤ 43), and produced graphs of the relationships between N, k, Q, and Pghost for optimum pattern sizes, as determined by Equation 4. The results show an approximately exponential relationship between Q and N/k [5]. Thus, for a fixed number of symbols, the capacity is proportional to the number of units. Let Q_rfp denote the maximum number of symbols attainable in memory schemes that use random fixed-size patterns. A typical relationship, derived from the data, is Q_rfp(Pghost = 0.01) ≈ 0.0086 e^(0.468 N/k) (Equation 8). 3.3 Random Receptors (a stochastic
approach) A second stochastic approach is to have each unit assigned to each symbol
with an independent fixed probability s. This method lends itself to easy mathematical
analysis, resulting in a closed-form analytical solution. After storing k symbols,
the probability that a given unit is active is 1 - (1 - s)k (independent of any
other unit). For a given symbol to be a ghost, every unit must either be active
or else not belong to that symbol''s pattern. That will happen with a probability
[1 - s . (1 - s)k] N, and thus the probability of a ghost is: Assuming Pghost
« 1 and k « a (both hold in our case), the expression can be simplified to: from
which a can be extracted: We can now optimize by finding the value of s that maximizes
a, given any desired upper bound on the expected value of Pghost. This is done
straightforwardly by solving BaBs o. Note that 8 N corresponds to L in the previous
approach. The solution is s l(k 1), which yields, after some algebraic manipulation:
A comparison of the results using the two stochastic approaches reveals an interesting
similarity. For large k, with Pghost 0.01 the term 0.468k of Equation 8 can be
seen as a numerical approximation to the log term in Equation 11, and the multiplicative
factor of 0.0086 in Equation 8 approximates Pghost in Equation 11. This is hardly
surprising, since the Law of Large Numbers implies that in the limit (N, k - 00,
with 8 fixed) the two methods are equivalent. 659 Finally, it should be. noted
that the stochastic approaches we analyzed generate a family of memory schemes,
with non-identical ghost-probabilities. Pghost in our formulas is therefore better
understood as an expected value, averaged over the entire family. 3.4 Partitioned
Binary Coding (a reference point) The last memory scheme we analyze is not strictly
distributed. Rather, it is somewhere in between a distributed and a localist representation,
and is presented for comparison with the previous results. For a given number
of units N and desired capacity k, the units are partitioned into k equal-size "slots," each consisting of N/k units (for simplicity we assume that k divides N). Each slot is capable of storing exactly one symbol. The most efficient representation for all possible symbols that may be stored into a slot is to assign them binary codes, using the N/k units of each slot as bits. This would allow 2^(N/k) symbols to be represented. Using binary coding, however, will not give us the required capacity of 1 symbol, since binary patterns subsume one another. For example, storing the code ''10110'' into one of the slots will cause the codes ''10010'', ''10100'' and ''00010'' (as well as several other codes) to become ghosts. A possible solution is to use only half of the bits in each slot for a binary code, and set the other half to the binary complement of that code (we assume that N/k is even). This way, the codes are guaranteed not to subsume one another. Let Q_pbc denote the number of symbols representable using a partitioned binary coding scheme. Then Q_pbc = 2^(N/2k) ≈ e^(0.347 N/k) (Equation 12). Once again, Q is exponential in N/k. The form of the result closely resembles
the estimated upper bound on the Bounded Overlap method given in Equation 3. There
is also a strong resemblance to Equations 8 and 11, except that the fractional
multiplier in front of the exponential, corresponding to Pghost, is missing. Pghost
is 0 for the Partitioned Binary Coding method, but this is enforced by dividing
the memory into disjoint sets of units rather than adjusting the patterns to reduce
overlap among symbols. As mentioned previously, this memory scheme is not really
distributed in the sense used in this paper, since there is no one pattern associated
with a symbol. Instead, a symbol is represented by any one of a set of k patterns, each N/k bits long, corresponding to its appearance in one of the k slots. To
check whether a symbol is present, all k slots must be examined. To store a new
symbol in memory, one must scan the k slots until an empty one is found. Equation
12 should therefore be used only as a point of reference. 4 Measurement of DCPS
The three distributed schemes we have studied all use unstructured patterns, the
only constraint being that patterns are at least roughly the same size. Imposing
more complex structure on any of these schemes is likely to reduce the capacity somewhat. Table 1 (Summary of results for various memory schemes): Bounded Overlap, Q_bo(N, k) ≈ e^(0.367 N/k); Random Fixed-size Patterns, Q_rfp(Pghost = 0.01) ≈ 0.0086 e^(0.468 N/k); Random Receptors, Q_rr ≈ Pghost e^(N log[(k+1)^(k+1) / ((k+1)^(k+1) - k^k)]); Partitioned Binary Coding, Q_pbc ≈ e^(0.347 N/k). In order to quantify
this effect, we measured the memory capacity of DCPS (BoltzCONS uses the same
memory scheme) and compared the results with the theoretical models analyzed above.
DCPS'' memory scheme is a modified version of the Random Receptors method [5].
The symbol space is the set of all triples over a 25 letter alphabet. Units have
fixed-size receptive fields organized as 6 x 6 x 6 subspaces. Patterns are manipulated
to minimize the variance in pattern size across symbols. The parameters for DCPS
are: N = 2000, with a pattern-size deviation of 1.5. When Pghost = 0.01 the measured capacity was k = 48 symbols. By substituting for N in Equation 11 we find that the highest k value for which Q_rr ≥ 15625 is 51. There does not appear to be a significant cost for
maintaining structure in the receptive fields. 5 Summary and Discussion Table
1 summarizes the results obtained for the four methods analyzed. Some differences must be emphasized: Q_bo and Q_pbc deal with guaranteed capacity, whereas Q_rfp and Q_rr are meaningful only for Pghost > 0. Q_bo is only an upper bound. Q_rfp is based on numerical estimates. Q_pbc is based on a scheme which is not strictly
coarse-coded. The similar functional form of all the results, although not surprising,
is aesthetically pleasing. Some of the functional dependencies among the various
parameters can be derived informally using qualitative arguments. Only a rigorous
analysis, however, can provide the definite answers that are needed for a better
understanding of these systems and their scaling properties. 661 Acknowledgments
We thank Geoffrey Hinton, Noga Alon and Victor Wei for helpful comments, and Joseph
Tebelskis for sharing with us his formula for approximating Pghost in the case
of fixed pattern sizes. This work was supported by National Science Foundation
grants IST-8516330 and EET-8716324, and by the Office of Naval Research under
contract number NOOO14-86- K-0678. The first author was supported by a National
Science Foundation graduate fellowship. References [1] Ballard, D H. (1986) Cortical
connections and parallel processing: structure and function. Behavioral and Brain
Sciences 9(1). [2] Feldman, J. A., and Ballard, D. H. (1982) Connectionist models
and their properties. Cognitive Science 6, pp. 205-254. [3] Hinton, G. E., McClelland, J. L., and Rumelhart, D. E. (1986) Distributed representations. In D. E. Rumelhart
and J. L. McClelland (eds.), Parallel Distributed Processing: Explorations in
the Microstructure of Cognition, volume 1. Cambridge, MA: MIT Press. [4] Macwilliams,
F.J., and Sloane, N.J.A. (1978). The Theory of Error-Correcting Codes, North-Holland.
[5] Rosenfeld, R. and Touretzky, D. S. (1987) Four capacity models for coarse-coded
symbol memories. Technical report CMU-CS-87-182, Carnegie Mellon University Computer
Science Department, Pittsburgh, PA. [6] St. John, M. F. and McClelland, J. L.
(1986) Reconstructive memory for sentences: a PDP approach. Proceedings of the
Ohio University Inference Conference. [7] Sullins, J. (1985) Value cell encoding
strategies. Technical report TR-165, Computer Science Department, University
of Rochester, Rochester, NY. [8] Touretzky, D. S., and Hinton, G. E. (1985) Symbols
among the neurons: details of a connectionist inference architecture. Proceedings
of IJCAI-85, Los Angeles, CA, [9] Touretzky, D. S., and Hinton, G. E. (1986) A
distributed connectionist production system. Technical report CMU-CS-86-172,
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA. [10]
Touretzky, D. S. (1986) BoltzCONS: reconciling connectionism with the recursive
nature of stacks and trees. Proceedings of the Eighth Annual Conference of the
Cognitive Science Society, Amherst, MA, pp. 522-530.'
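The Random Receptors analysis discussed above lends itself to a quick numerical check. The sketch below is a rough illustration under my own assumptions (an exact ghost-probability expression rather than the asymptotic Equation 11, and invented function names); it is not code from the paper.

```python
# Rough numerical sketch of the Random Receptors scheme: each unit joins each
# symbol's pattern with independent probability s, and an unstored symbol becomes
# a ghost when every unit is either active or outside its pattern.
def p_ghost(n_units, n_symbols, k_stored, s):
    """Probability that at least one unstored symbol has become a ghost."""
    p_symbol_ghost = (1.0 - s * (1.0 - s) ** k_stored) ** n_units
    return 1.0 - (1.0 - p_symbol_ghost) ** (n_symbols - k_stored)

def capacity(n_units, n_symbols, tolerance):
    """Largest k keeping Pghost under the tolerance, using the optimal s = 1/(k + 1)."""
    k = 0
    while p_ghost(n_units, n_symbols, k + 1, 1.0 / (k + 2)) <= tolerance:
        k += 1
    return k

# Parameters loosely modeled on the DCPS measurement quoted above.
print(capacity(n_units=2000, n_symbols=25 ** 3, tolerance=0.01))
```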
- 'INTRODUCTION 1.1 THE MAUTHNER SYSTEM Much is known about the brainstem system
that controls fast-start escapes in teleost fish. The most prominent feature of
this network is the pair of large Mauthner cells whose axons cross the midline
and descend down the spinal cord to synapse on primary motoneurons. The Mauthner
system also includes inhibitory neurons, the PHP cells, which have a unique and
intense field effect inhibition at the spike initiating zone of the Mauthner cells
(Faber and Korn, 1978). The Mauthner system is part of the full brainstem escape
network which also includes two pairs of cells homologous to the Mauthner cell
and other populations of reticulospinal neurons. With this network fish initiate
escapes only from appropriate stimuli, turn away from the offending stimulus,
and do so very rapidly with a latency around 15 msec in goldfish. The Mauthner
cells play an important role in these functions. Only one fires, thus controlling the direction of the initial
turn, and it fires very quickly (4-5 msec). They also have high thresholds due
to intrinsic membrane properties and the inhibitory influence of the PHP cells.
(For reviews, see Eaton et al., 1991 and Faber and Korn, 1978.) Acoustic stimuli are thought to be sufficient to trigger the response (Blaxter, 1981); both Mauthner
cells and PHP cells receive innervation from primary auditory fibers (Faber and
Korn, 1978). In addition, the Mauthner cells have been shown physiologically
to be very sensitive to acoustic pressure (Canfield and Eaton, 1990). 1.2 LOCALIZING
SOUNDS UNDERWATER In contrast to terrestrial vertebrates, there are several reasons
for supposing that fish do not use time of arrival or intensity differences between
the two ears to localize sounds: underwater sound travels over four times as fast
as in air; the fish body provides no acoustic shadow; and fish use a single transducer
to sense pressure which is conveyed equally to the two ears. Sound pressure is
transduced into vibrations by the swim bladder which, in goldfish, is mechanically
linked to the inner ear. Fish are sensitive to an additional component of the
acoustic wave, the particle motion. Any particle of the medium taking part in the propagation of a longitudinal wave will oscillate about an equilibrium point along
the axis of propagation. Fish have roughly the same density as water, and will
experience these oscillations. The motion is detected by the bending of sensory
hairs on auditory receptor cells by the otolith, an inertial mass suspended above
the hair cells. This component of the sound will provide the axis of propagation,
but there is a 180 degree ambiguity. Both pressure and particle motion are sensed
by hair cells of the inner ear. In goldfish these signals may be nearly segregated.
The linkage with the swim bladder impinges primarily on a boney chamber containing
two of the endorgans of the inner ear: the saccule and the lagena. The utricle
is a third endorgan also thought to mediate some acoustic function, without such
direct input from the swim bladder. Using both of these components fish can localize
sounds. According to the phase model (Schuijf, 1981) fish analyze the phase difference
between the pressure component of the sound and the particle displacement component
to calculate distance and direction. When pressure is increasing, particles will
be pushed in the direc tion of sound propagation, and when pressure is decreasing
particles will be pulled back. There will be a phase lag between pressure and
particle motion which varies with frequency and distance from the sound source.
This, and the separation of the pressure from the displacement signals in the
ear of some species pose the greatest problems for theories of sound localization
in fish. The acoustically triggered escape in goldfish is a uniquely tractable
problem in underwater sound localization. First, there is the fairly good segregation
of pressure from particle motion at the sensory level. Second, the escape is
very rapid. The decision to turn left or right is equivalent to the firing of
one or the other Mauthner cell, and this happens within about 4 msec. With transmission
delay, this decision relies only on the initial 2 msec or so of the stimulus.
For most salient frequencies, the phase lag will not introduce uncertainty: both
the first and second derivatives of particle position and acoustic pressure will
be either positive or negative. 576 Guzik and Eaton 1.3 THE XNOR MODEL Active
pressure input Left sound source Active displacement input No response Mauthner
output Right Mauthner output .. inhibitory 0- excitatory Figure 1 Truth table
and minimal network for the XNOR model. Given the above simplification of the
problem, we can see that each Mauthner cell must perform a logical operation (Guzik
and Eaton, 1993j Eaton et al, 1994). The left Mauthner cell should fire when sounds
are located on the left, and this occurs when either pressure is increasing and
particle motion is from the left or when pressure is decreasing and particle motion
is from the right. We can call displacement from the left positive for the left
Mauthner cell, and immediately we
have the logical operator exclusive-nor (or XNOR). The right Mauthner cell must
solve the same problem with a redefinition of right displacement as positive.
The conditions for this logic gate are shown in figure 1A for both Mauthner cells.
This analysis simplifies our task of understanding the computational role of individual
elements in the system. For example, a minimal network could appear as in figure 1B. In this model PHP units perform a logical sub-task of the XNOR as AND gates. This
model requires at least two functional classes of PHP units on each side of the
brain. These PHP units will be activated for the combinations of pressure and
displacement that indicate a sound coming from the wrong direction for the Mauthner
cell on that side. Both Mauthner cells are activated by sufficient changes in
pressure in either direction, high or low, and will be gated by the PHP cells.
This minimal model emerged from explorations of the system using the connectionist
paradigm, and inspired us to extend our efforts to a more realistic context. 2
THE NETWORK We used a connectionist model to explore candidate solutions to the
left-right discrimination problem that include the populations of neurons known
to exist and include a distributed input resembling the sort available from the
hair cells of the inner ear. We were interested in generating a number of alternative
solutions to be better prepared to interpret physiological recordings from live
goldfish, and to look for variations of, or alternatives to, the XNOR model. 2.1
THE ARCHITECTURE As shown in figure 2, there are four layers in the connectionist
model. The input layer consists of four pools of hair cell units. These represent
the sensory neurons of the inner ear. There are two pools on each side: the saccule
and the utricle. Treating only the horizontal plane, we have ignored the lagena
in this model. The saccule is the organ of pressure sensation and the utricle
is treated as the organ of particle motion. Each pool contains 16 hair cell units
maximally responsive for displacements of their sensory hairs in one particular
direction. They are activated as the eosine of the difference between their preferred
direction and the stimulus dellection. All other units use sigmoidal activation
functions. The next layer consists of units representing the auditory fibers of
the VIIIth nerve. Each pool receives inputs from only one pool of hair cell units,
as nerve fibers have not been seen to innervate more than one endorgan. There
are 10 units per fiber The fiber units provide input to both the inhibitory PHP
units, and to the Mauthner units. There are four pools of PHP units, two on each
side of the fish. One set on each side represents the collateral PHP eells, and
the other set represents the commissural PHP cells (Faber and Korn, 1978). Both
types receive inputs from the auditory fibers. The collaterals project only to
the Mauthner cell on the same side. The commissurals project to both Mauthner
cells. There are five units per PHP pool. The Mauthner cell units receive inputs from saccular and utricular fibers on their same side only,
as well as inputs from a single collateral PHP population and both commissural
PHP populations. Figure 2: The architecture (left and right saccule and utricle hair cells, auditory nerve fiber pools, and the left and right Mauthner units).
Weights from the PHP units are all constrained to be negative, while all others
are constrained to be positive. The weights are implemented using the function
below, positive or negative depending on the polarity of the weight. The function
asymptotes to zero for negative values, and to the identity function for values
above 2. This function vastly improved learning compared with the simpler, but
highly nonlinear exponential function used in earlier versions of the model. 2.2
TRAINING We used a total of 240 training examples. We began with a set of 24 directions
for particle motion, evenly distributed around 360 degrees. These each appeared
twice, once with increasing pressure and once with decreasing pressure, making
a base set of 48 examples. Pressure was introduced as a deflection across saccular
hair cells of either 0 degrees for low pressure, or 180 degrees for high pressure.
These should be thought of as reflecting the expansion or compression of the swim
bladder. Targets for the Mauthner cells were either 0 or 1 depending upon the
conditions as described in the XNOR model, in figure lA. Directional Hearing by
the Mauthner System 579 N ext by randomly perturbing the activations of the hair
cells for these 48 patterns, we generated 144 noisy examples. These were randomly
increased or decreased up to 10. An additional 48 examples were generated by dividing
the hair cell adivity by two to represent sub-threshold stimuli. These last 48
targets were set to zero. The network was trained in batch mode with backpropagation
to minimize a cross entropy error measure, using conjugate gradient search. Unassisted
backpropaga tion was unsuccessful at finding solutions. For the eight solutions
discussed here, two parameters were varied at the inputs. In some solutions the
utride was stimulated with a vedor sum of the displacement and the pressure components,
or a "mixed" input. In some solutions the hair cells in the utride are not distributed
uniformly, but in a gaussian manner with the mean tuning of 45 degrees to the
right or left, in the two ears respedively. This approximates the actual distribution
of hair cells in the goldfish utride (Platt, 1977). 3 RESULTS Analyzing the activation
of the hidden units as a fundion of input pattern we found activity consistent
with known physiology, nothing inconsistent with our knowledge of the system,
and some predidions to be evaluated during intracellular recordings from PHP cells
and auditory afFerents. First, many PHP cells were found exhibiting a logical
fUndion, which is consistent with our minimal model described above. These tended
to projed only to one Mauthner cell unit, which suggests that primarily the collateral
PHP cells will demonstrate logical properties. Most logical PHP units were NAND
gates with very large weights to one Mauthner cell. An example is a unit which
is on for all stimuli except those having displacements anywhere on the left when
pressure is Second, saccular fibers tended to be either sensitive to high or low
pressure, consis tent with recordings of Furukawa and Ishii (1967). In addition
there were a dass which looked like threshold fibers, highly active for all supra-threshold
stimuli, and inactive for all sub-threshold stimuli. There were some fibers with
no obvious se ledivity, as well. Third, utricular fibers often demonstrate sensitivity
for displacements exclusively from one side ofthe fish, consistent with our minimal
model. Right and left utricular fibers have not yet been demonstrated in the real
system. Utricular fibers also demonstrated more coarsely tuned, less interpretable
receptive fields. All solutions that included a mixed input to the utricle, for example, produced fibers that seemed to be "not 180 degree" or "not 0 degree", countering the pressure vectors. We interpret these fibers as doing clean-up given the absence of negative weights at that layer. Fourth, sub-threshold behavior
of units is not always consistent with their supra-threshold behavior. At sub-threshold levels of stimulation the activity of units may not reflect their computational
role in the behavior. Thus, intracellular recordings should explore stimulus ranges
known to elicit the behavior. Fifth, Mauthner units usually
receive very strong inputs from pressure fibers. This is consistent with physiological
recordings which suggest that the Mauthner cells in goldfish are more sensitive
to sound pressure than displacement (Canfield and Eaton, 1990). Sixth, Mauthner cells always acquired relatively equal high negative biases. This is consistent with the known
low input resistance of the real Mauthner cells, giving them a high threshold
(Faber and Korn, 1978). Seventh, PHP cells that maintain substantial bilateral
connections tend to be ton ically active. These contribute additional negative
bias to the Mauthner cells. The relative sizes of the connections are often assymetric.
This suggests that the commis sural PHP cells serve primarily to regulate Mauthner
threshold, ensure behavioral response only to intense stimuli, consistent with
Faber and Korn (1978). These cells could only contribute to a partial solution
of the XNOR problem. Eighth, all solutions consistently used logic gate PHP units
for only 50 to 75 of the training examples. Probably distributed solutions relying
on the direct con nections of auditory nerve fibers to Mauthner cells were more
easily learned, and logic gate units only developed to handle the unsolved eases.
Cases solved without logic gate units were solved by assymetric projections to
the Mauthner cells of one polarity of pressure and one class of direction fibers,
left or right. Curiously, most of these eases involved a preferential projection
from high pressure fibers to the Mauthner units, along with directional fibers
encoding displacements from each Mauthner unit''s positive direction. This means
the logic gate units tended to handle the low pressure eases. This may be a result
of the presence of the assymetric distributions of utricular hair cells in 6 out
of the 8 solutions. 4 CONCLUSIONS Ve have generated predictions for the behavior
of neurons in the Mauthner system under different conditions of acoustic stimulation.
The predictions generated with our connectionist model are consistent with our
interpretation of the phase model for underwater sound localization in fishes
as a logical operator. The results are also consistent with previously described
properties of the Mauthner system. Though perhaps based on the characteristics
more of the training procedure, our solutions suggest that we may find a mixed
solution in the fish. Direct projections to the Mauthner cells from the auditory
nerve perhaps handle many of the commonly encountered acoustic threats. The results
of Blaxter (1981) support the idea that fish do escape from stimuli regardless
of the polarity of the initial pressure change. Without significant nonlinear
processing at the Mauthner cell itsdf, or more com plex processing in the auditory
fibers, direct connections could not handle all of these eases. These possibilities
deserve exploration. We propose different computational roles for the two classes
of inhibitory PHP neurons. We expect only unilaterally-projecting PHP cells to
demonstrate some logical function of pressure and particle motion. We believe
that some elements of the Mauthner system must be found to demonstrate such minimal
logical functions if the phase modd is an explanation for left-right discrimination
by the Mauthner system. Directional Hearing by the Mauthner System 581 We are
currently preparing to deliver controlled acoustic stimuli to goldfish during
acute intracellular recording procedures from the PHP neurons, the afferent fibers
and the Mauthner cells. Our insights from this model will greatly assist us in
designing the stimulus regimen, and in interpreting our experimental results.
Plans for future computational work are of a dynamic model that will include the
results of these physiological investigations, as well as a more realistic version
of the Mauthner system. Acknowledgements We are grateful for the technical assistance
of members of the Boulder Connectionist Research Group, especially Don Mathis
for help in debugging and optimizing the original code. We thank P.L. Edds-Walton
for crucial discussions. This work was supported by a grant to RCE from the National
Institutes of Health (RO1 NS22621). References Blaxter, J.H.S., J.A.B. Gray, and E.J. Denton (1981) Sound and startle responses in herring shoals. J. Mar. Biol.
Assoc. UK, 61: 851-869 Canfield, J.G. and R.C. Eaton (1990) Swimbladder acoustic
pressure transduction initiates Mauthner-mediated escape. Nature, 347: 760-762. Eaton,
R.C., J.G. Canfield and A.L. Guzik (1994) Left-right discrimination of sound onset
by the Mauthner system. Brain Behav. Evol., in press. Eaton, R.C., R. DiDomenico
and J. Nissanov (1991) Role of the Mauthner cell in sensorimotor integration by
the brain stem escape network. Brain Behav. Evol., Faber, D.S. and H. Korn (1978)
Electrophysiology of the Mauthner cell: Basic properties, synaptic mechanisms
and associated networks. In Neurobiology of the Mauthner Cell, D.S. Faber and
H. Korn (eds) , Raven Press, NY, pp. 47-131 Fay, R.R.(1984) The goldfish ear codes
the axis of acoustic particle motion in three dimensions. Furukawa, T. and Y. Ishii (1967)
Effects of static bending of sensory hairs on sound reception in the goldfish.
Japanese J. Physiol., 17: 572-588 Guzik, A.L. and R.C. Eaton (1993) The XNOR model
for directional hearing by the Mauthner system. Soc. Neurosci. Abstr. Platt, C.
(1977) Hair cell distribution and orientation in goldfish otolith organs. J. Schuijf,
A. (1981) Models of acoustic localization. In Hearing and Sound Communication in Fishes, W.N. Tavolga, A.N. Popper and R.R. Fay (eds.), Springer, New'
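The XNOR logic described in the passage above can be written out directly as a truth table. The sketch below is my own illustration of that logic (the function name and boolean encoding are assumptions, not the connectionist implementation used in the paper).

```python
# Truth-table sketch of the XNOR model: each Mauthner cell fires when the sign of
# the pressure change agrees with particle motion arriving from that cell's positive side.
def mauthner_decision(pressure_rising: bool, motion_from_left: bool):
    """Return (left_fires, right_fires) for one decision window."""
    left = pressure_rising == motion_from_left          # XNOR for the left cell
    right = pressure_rising == (not motion_from_left)   # XNOR for the right cell
    return left, right

for p in (True, False):
    for d in (True, False):
        print(f"pressure_rising={p}, motion_from_left={d} -> {mauthner_decision(p, d)}")
```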
- source_sentence: Effect of input stimulus coding on self-supervised learning performance
sentences:
- 'INTRODUCTION Formal language learning (Gold, 1969) has been a topic of concern
for cognitive science and artificial intelligence. It is the task of inducing
a computational description of a formal language from a sequence of positive and
negative examples of strings in the target language. Neural information processing approaches to this problem involve the use of recurrent networks that embody the internal state mechanisms underlying automata models (Cleeremans et al., 1989; Elman, 1990; Pollack, 1991; Giles et al., 1992; Watrous & Kuhn, 1992). Unlike traditional
automata-based approaches, learning systems relying on recurrent networks have
an additional burden: we are still unsure as to what these networks are doing. Some
researchers have assumed that the networks are learning to simulate finite state
machines (FSMs) in their state dynamics and have begun to extract FSMs from the
net works'' state transition dynamics (Cleeremans et al., 1989; Giles et al.,
1992; Watrous Kuhn, 1992). These extraction methods employ various clustering
techniques to partition the internal state space of the recurrent network into
a finite number of regions correspond ing to the states of a finite state automaton.
This assumption of finite state behavior is dangerous on two accounts. First,
these extrac tion techniques are based on a discretization of the state space
which ignores the basic def inition of information processing state. Second, discretization
can give rise to incomplete computational explanations of systems operating over
a continuous state space. SENSITIVITY TO INITIAL CONDITIONS In this section, I
will demonstrate how sensitivity to initial conditions can confuse an FSM extraction
system. The basis of this claim rests upon the definition of information processing
state. Information processing (IP) state is the foundation underlying automata
theory. Two IP states are the same if and only if they generate the same output
responses for all possible future inputs (Hopcroft & Ullman, 1979). This definition
is the fulcrum for many proofs and techniques, including finite state machine
minimization. Any FSM extraction technique should embrace this definition, in
fact it grounds the standard FSM minimization methods and the physical system
modelling of Crutchfield and Young (Crutchfield Young, 1989). Some dynamical
systems exhibit exponential divergence for nearby state vectors, yet remain confined
within an attractor. This is known as sensitivity to initial conditions. If this
divergent behavior is quantized, it appears as nondeterministic symbol sequences
(Crutchfield & Young, 1989) even though the underlying dynamical system is completely deterministic (Figure 1). Consider a recurrent network with one output and three
recurrent state units. The output unit performs a threshold at zero activation
for state unit one. That is, when the activation of the first state unit of the
current state is less than zero then the output is A. Otherwise, the output is
B. Equation 1 presents a mathematical description. Set) is the current state of
the system 0 (t) is the current output. Figure 2 illustrates what happens when
you run this network for many iterations. The point in the upper left hand state
space is actually a thousand individual points all within a ball of radius 0.01.
In one iteration these points migrate down to the lower corner of the state space.
Notice that the ball has elongated along one dimension. After ten iterations the
original ball shape is no longer visible. After seventeen, the points are beginning
to spread along a two dimensional sheet within state space. And by fifty iterations,
we see the network reaching its full extent in state space. This behavior
is known as sensitivity to initial conditions and is one of three conditions which
have been used to characterize chaotic dynamical systems (Devaney, 1989). Figure 1: Examples of deterministic dynamical systems whose discretized trajectories appear nondeterministic. In short, sensitivity to initial conditions implies that any epsilon ball on the attractor of the dynamical system will exponentially diverge, yet still be contained within the locus of the attractor. The rate of this divergence
is illustrated in Figure 3 where the maximum distance between two points is plotted
with respect to the number of iterations. Note the exponential growth before saturation.
Saturation occurs as the point cloud envelops the attractor. No matter how small
one partitions the state space, sensitivity to initial conditions will eventually
force the extracted state to split into multiple trajectories independent of the
future input sequence. This is characteristic of a nondeterministic state transition.
Unfortu nately, it is very difficult, and probably intractable, to differentiate
between a nondetermin istic system with a small number of states or a deterministic
with large number of states. In certain cases, however, it is possible to analytically
ascertain this distinction (Crutchfield THE OBSERVERS'' PARADOX One response to
this problem is to evoke more computationally complex models such as push-down
or linear-bounded automata. Unfortunately, the act of quantization can actually
introduce both complexion and complexity in the resulting symbol sequence. Pollack
and I have focused on a well-hidden problems with the symbol system approach to
understand ing the computational powers of physical systems. This work (Kolen Pollack,
1993; S04 Kolen outputA 1 Start (eO.Ol) outputA,B 1 17 iterations outputB 1 1
iteration outputA,B 1 25 iterations outputA 1 10 iterations 50 iterations Figure
2: The state space of a recurrent network whose next state transitions are sensitive
to initial conditions. The initial epsilon ball contains 1000 points. These points
first straddle the output decision boundary at iteration seven. Kolen Pollack,
In press) demonstrated that computational complexity, in terms of Chom sky''s
hierarchy of formal languages (Chomsky, 1957; Chomsky, 1965) and Newell and Simon''s
physical symbol systems (Newell Simon, 1976), is not intrinsic to physical sys
tems. The demonstration below shows how apparently trivial changes in the partitioning
of state space can produce symbol sequences from varying complexity classes. Consider
a point moving in a circular orbit with a fixed rotational velocity, such as the
end of a rotating rod spinning around a fixed center, or imagine watching a white
dot on a spin ning bicycle wheel. We measure the location of the dot by periodically
sampling the loca tion with a single decision boundary (Figure 4, left side).
If the point is to the left of boundary at the time of the sample, we write down
an "1". Likewise, we write down an "r" when the point is on the other side. (The
probability of the point landing on the boundary is zero and can arbitrarily be
assigned to either category without affecting the results below.) In the limit,
we will have recorded an infinite sequence of symbols containing long sequences
of r''s and l''s. The specific ordering of symbols observed in a long sequence
of multiple rotations is Fool''s Gold: Extracting Finite State Machines from Recurrent
Network Dynamics 505 Figure 3: Spread of initial points across the attractor as
measured by maximum distance. Figure 4: On the left, two decision regions which
induce a context free language. 9 is the current angle of rotation. At the time
of sampling, if the point is to the left (right) of the dividing line, an 1 (r)
is generated. On the right, three decision regions which induce a context sensitive
language. dependent upon the initial rotational angle of the system. However,
the sequence does pos sess a number of recurring structural regularities, which
we call sentences: a run of r''s fol lowed by a run of l''s. For a fixed rotational
velocity (rotations per time unit) and sampling rate, the observed system will
generate sentences of the form r n1 m (n, m 0). (The notation rn indicates a
sequence of n r''s.) For a fixed sampling rate, each rotational velocity spec
ifies up to three sentences whose number of r''s and l''s differ by at most one.
These sen tences repeat in an arbitrary manner. Thus, a typical subsequence of
a rotator which produces sentences r n1 n, r n1 nl ,rn 11 n would look like 506
Kolen A language of sentences may be constructed by examining the families of
sentences gener ated by a large collection of individuals, much like a natural
language is induced from the abilities of its individual speakers. In this context,
a language could be induced from a pop ulation of rotators with different rotational
velocities where individuals generate sentences of the form {r"l n, r"l "1 ,r"ll"},
n O. The reSUlting language can be described by a context free grammar and has
unbounded dependencies; the number of 1 ''s is a function of the number of preceding
r''s. These two constraints on the language imply that the induced language is
context free. To show that this complexity class assignment is an artifact of
the observational mecha nism, consider the mechanism which reports three disjoint
regions: 1, c, and r (Figure 4, right side). Now the same rotating point will
generate sequences ofthe form For a fixed sampling rate, each rotational velocity
specifies up to seven sentences, rncffil k, when n, m, and k can differ no by
no more than one. Again, a language of sentences may be constructed containing
all sentences in which the number ofr''s, c''s, and l''s differs by no more than
one. The resulting language is context sensitive since it can be described by
a context sensitive grammar and cannot be context free as it is the finite union
of several context sensitive languages related to r"c"l n. CONCLUSION Using recurrent
neural networks as the representation underlying the language learning task has
revealed some inherent problems with the concept of this task. While formal languages
have mathematical validity, looking for language induction in physical systems
is question able, especially if that system operates with continuous internal
states. As I have shown, there are two major problems with the extraction of a
learned automata from our models. First, sensitivity to initial conditions produces
nondeterministic machines whose trajecto ries are specified by both the initial
state of the network and the dynamics of the state trans formation. The dynamics
provide the shape of the eventual attractor. The initial conditions specify the
allowable trajectories toward that attractor. While clustering methods work in
the analysis of feed-forward networks because of neighborhood preservation (as
each layer is a homeomorphism), they may fail when applied to recurrent network
state space trans formations. FSM construction methods which look for single transitions
between regions will not help in this case because the network eventually separates
initially nearby states across several FSM state regions. The second problem with
the extraction of a learned automata from recurrent network is that trivial changes
in observation strategies can cause one to induce behavioral descrip tions from
a wide range of computational complexity classes for a single system. It is the
researcher''s bias which determines that a dynamical system is equivalent to a
finite state automata. Fool''s Gold: Extracting Finite State Machines from Recurrent
Network Dynamics 507 One response to the first problem described above has been
to remove and eliminate the sources of nondeterminism from the mechanisms. Zeng
et. a1 (1993) corrected the second order recurrent network model by replacing
the continuous internal state transformation with a discrete step function. (The
continuous activation remained for training purposes.) This move was justified
by their focus on regular language learning, as these languages can be rec ognized
by finite state machines. This work is questionable on two points, however. First,
tractable algorithms already exist for solving this problem (e.g. Angluin, 1987).
Second, they claim that the network is self-clustering the internal states. Self-clustering
occurs only at the comers of the state space hypercube because of the discrete
activation function, in the same manner as a digital sequential circuit "clusters"
its states. Das and Mozer (1994), on the other hand, have relocated the clustering
algorithm. Their work focused on recurrent networks that perform internal clustering
during training. These networks operate much like competitive learning in feed-forward
networks (e.g. Rumelhart and Zipser, 1986) as the dynamics of the learning rules
constrain the state representations such that stable clusters emerge. The shortcomings
of finite state machine extraction must be understood with respect to the task
at hand. The actual dynamics of the network may be inconsequential to the final
prod uct if one is using the recurrent network as a pathway for designing a finite
state machine. In this engineering situation, the network is thrown away once
the FSM is extracted. Neural network training can be viewed as an "interior" method
to finding discrete solutions. It is interior in the same sense as linear programming
algorithms can be classified as either edge or interior methods. The former follows
the edges of the simplex, much like traditional FSM learning algorithms search
the space of FSMs. Internal methods, on the other hand, explore search spaces
which can embed the target spaces. Linear programming algorithms employing internal
methods move through the interior of the defined simplex. Likewise, recurrent
neural network learning methods swim through mechanisms with mUltiple finite state
interpretations. Some researchers, specifically those discussed above, have begun
to bias recurrent network learning to walk the edges (Zeng et al, 1993) or to
internally cluster states (Das Mozer, 1994). In order to understand the behavior
of recurrent networks, these devices should be regarded as dynamical systems (Kolen,
1994). In particular, most common recurrent networks are actually iterated mappings,
nonlinear versions of Barnsley''s iterated function systems (Barnsley, 1988).
While automata also fall into this class, they are a specialization of dynamical
systems, namely discrete time and state systems. Unfortunately, information processing
abstractions are only applicable within this domain and do not make any sense
in the broader domains of continuous time or continuous space dynamical systems.
Acknowledgments The research reported in this paper has been supported by Office
of Naval Research grant number NOOOI4-92-J-1195. I thank all those who have made
comments and suggestions for improvement of this paper, especially Greg Saunders
and Lee Giles. References Angluin, D. (1987). Learning Regular Sets from Queries
and Counterexamples. Information 508 Kolen Barnsley, M. (1988). Fractals Everywhere.
Academic Press: San Diego, CA. Chomsky, N. (1957). Syntactic Structures. The Hague:
Mounton Co. Chomsky, N. (1965). Aspects of the Theory of Syntax. Cambridge, Mass.:
MIT Press. Cleeremans, A, Servan-Schreiber, D. McClelland, J. L. (1989). Finite
state automata and simple recurrent networks. Neural Computation, 1,372-381. Crutchfield,
J. Young, K. (1989). Computation at the Onset of Chaos. In W. Zurek, (Ed.), Entropy,
Complexity, and the Physics of Information. Reading: Addison-Wesely. Das, R. Mozer,
M. (1994) A Hybrid Gradient-DescentClustering Technique for Finite State Machine
Induction. In Jack D. Cowan, Gerald Tesauro, and Joshua Alspector, (Eds.), Advances
in Neural Information Processing Systems 6. Morgan Kaufman: San Francisco. Devaney,
R. L. (1989). An Introduction to Chaotic Dynamical Systems. Addison-Wesley. Elman,
J. (1990). Finding structure in time. Cognitive Science, 14, 179-211. and Learning
an Unknown Grammar with Recurrent Neural Networks. In John E. Moody, Steven J.
Hanson Richard P. Lippman, (Eds.), Advances in Neural Information Processing
Systems 4. Morgan Kaufman. Gold, E. M. (1969). Language identification in the
limit. Information and Control, 10,372- Hopcroft, J. E. Ullman, J. D. (1979).
Introduction to Automata Theory, Languages, and Computation. Addison-Wesely. Kolen,
J. F. (1994) Recurrent Networks: State Machines or Iterated Function Systems?
In M. C. Mozer, P. Smolensky, D. S. Touretzky, J. L. Elman, AS. Weigend (Eds.),
Proceedings of the 1993 Connectionist Models Summer School. (pp. 203-210) Hillsdale,
NJ: Erlbaum Associates. Kolen, J. F. Pollack, J. B. (1993). The Apparent Computational
Complexity of Physical Systems. In Proceedings of the Fifteenth Annual Conference
of the Cognitive Science Society. Laurence Earlbaum. Kolen, J. F. Pollack, J.
B. (In press) The Observers'' Paradox: The Apparent Computational Complexity of
Physical Systems. Journal of Experimental and Theoretical Artificial Intelli gence.
Pollack, J. B. (1991). The Induction Of Dynamical Recognizers. Machine Learning,
7.227- Newell, A. Simon, H. A (1976). Computer science as empirical inquiry:
symbols and search. Communications of the Associationfor Computing Machinery,
19, 113-126. Rumelhart, D. E., and Zipser, D. (1986). Feature Discovery by Competitive
Learning. In D. E. Rumelhart, J. L. McClelland, and the PDP Research Group, (Eds.),
Parallel Distributed Processing. Volume 1. 151-193. MIT Press: Cambridge, MA Watrous,
R. L. Kuhn, G. M. (1992). Induction of Finite-State Automata Using Second Order
Recurrent Networks. In John E. Moody, Steven J. Hanson Richard P. Lippman, (Eds.),
Advances in Neural Information Processing Systems 4. Morgan Kaufman. Zeng, Z.,
Goodman, R. M., Smyth, P. (1993). Learning Finite State Machines With Self-Clus
tering Recurrent Networks. Neural Computation, 5, 976-990 PART IV NEUROSCIENCE'
- 'INTRODUCTION Temporal difference (TD) planning [6, 7] uses prediction for control.
Consider an agent moving around a finite grid such as the one in figure 1 (the
agent is incapable of crossing the barrier) trying to reach a goal whose position
it does not know. If it can predict how far away from the goal it is at the current
step, and how far away from the goal it is at the next step, after making a move,
then it can decide whether or not that move was helpful or harmful. If, in addition,
it can record this fact, then it can learn how to navigate to the goal. This generation
of actions from predictions is closely related to the mechanism of dynamical programming.
TD is used to learn the predictions in the first place. Consider the agent moving
around randomly on the grid, receiving a negative reinforcement of -1 for every
move it makes apart from moves which take it onto the goal. In this case, if it
can estimat.e from every location it visits, how much reinforcement (discounted
by how soon it arrives) it will get before it next reaches the goal, it will be
predicting how far away it is, based on the random method of selecting actions.
TD''s mechanism of learning is to force the predictions to be consistent; the
prediction from location a should be -1 more than the average of the predictions
from the locations that can be reached in one step (hence the extra -1 reinforcement)
from a. 464 Navigating Through Temporal Difference 465 If the agent initially
selects each action with the same probability, then the estimate of future reinforcement
from a will be monotonically related to how many steps a is away from the goal.
This makes the predictions useful for criticising actions as above. In practice,
the agent will modify its actions according to this criticism at the same time
as learning the predictions based on those actions. Barto, Sutton and Watkins
[2] develop this example, and show how the TD mech anism coupled with a punctate
representation of the stimulus (referred to as''RBsw below) finds the optimal
paths to the goal. ''RBsw ignores the cues shown in figure 1, and devotes one
input unit to each location on the grid, which fires if and only if the agent
is at that place. TD methods can however work with more general codes. Section
2 considers al ternative representations, including ones that are sensitive to
the orientation of the agent as it moves through the grid, and section 3 looks
at a restricted form of la. tent learning - what the agent can divine about its
environment in the absence of reinforcement. Both techniques can improve the speed
of learning. 2 ALTERNATE REPRESENTATIONS Stimulus representations, the means by
which the agent finds out from the environ ment where it is, can be classified
along two dimensions; whether they are punctate or distributed, and whether they
are directionally sensitive or in register with the world. Over most of the grid,
a ''sensible'' distributed representation, such as a coarse-coded one, would be
expected to make learning faster, as information about the value and action functions
could be shared across adjacent grid points. There are points of discontinuity
in the actions, as in the region above the right hand arm of the barrier, but
they are few. In his PhD thesis [9], Watkins considers a rather similar problem
to that in figure I, and solves it using his variant ofTD, Q-Iearning, based on
a CMAC [1] coarse-coded representation of the space. Since his agent moves in
a continuous bounded space, rather than being confined merely to discrete grid
points, something of this sort is anyway essential. After the initial learning,
Watkins arbitrarily makes the agent move ten times more slowly in a closed section
of the space. This has a similar effect to the barrier in inducing a discontinuity
in the action space. Despite the CMACS forcing the system to share information
across such discontinuities, they were able to learn the task quickly. The other
dimension over which representations may vary involves the extent to which they
are sensitive to the direction in which the agent is facing. This is of interest
if the agent must construe its location from the cues around the grid. In this
case, rather than moving North, South, East or West, which are actions registered
with the world, the agent should only move Ahead, Left or Right (Behind is disabled
as an additional constraint), whose effects are also orientation dependent. This,
together with the fact that the representation will be less compact (it having
a larger input dimensionality) should make learning slower. Dynamical programming
and its equivalents are notoriously subject to Bellman''s curse of dimensionality,
an engineering equivalent of exponential explosion in search. Table 1 shows four
possible representations classified along these two dimensions. 466 Dayan Coarse
ness Directionally Punctate Distributed Sensltlve R,x RA Insensltlve ''RBSW ''RCMAC
Table 1: Representations. ''RBSW is the representation Barto, Sutton and Watkins
used. R,x is punctate and directionally sensitive - it devotes four units to every
grid point, one of which fires for each possible orientation of the agent. ''RcIAC''
the equivalent of Watkins'' representation, was not simulated, because its capabilities
would not differ markedly from those of the mapping-based representation developed
in the next section. nA is rather different from the other representations; it
provides a test of a represen tation which is more directly associated with the
sensory information that might be available directly from the cues. Figure 2 shows
how ''RA works. Various identifiable cues, C 1 ... C c (c 7 in the figure) are
scattered around the outside of the grid, and the agent has a fictitious ''retina''
which rotates with it. This retina is divided into a number of angular buckets
(8 in the figure), and each bucket has c units, the iSh one of which responds
if the cue Ci is visible in that bucket. This representation is clearly directionally
sensitive (if the agent is facing a different way, then so is its retina, and
so no cue will be visible in the same bucket as it was before), and also distributed,
since in general more than one cue will be visible from every location. Note that
there is no restriction on the number of units that can fire in each bucket at
any time - more than one will fire if more than one cue is visible there. Also,
under the present system ''RA will in general not work if its coding is ambiguous
- grid points must be distinguishable. Finally, it should be clear that ''RA is
not biologically plausible. Figure 3 shows the learning curves for the three representations
simulated. Each point is generated by switching off the learning temporarily after
a certain number of iterations, starting the agent from everywhere in the grid,
and averaging how many steps it takes in getting to the goal over and above the
minimum necesary. It is apparent that n.x is substantially worse, but, surprisingly,
that ''RA is actually better than ''RBSW . This implies that the added advantage
of its distributed na ture more than outweighs its disadvantages of having more
components and being directionally sensitive. One of the motivations behind studying
alternate representations is the experimen tal findings on place cells in the
hippocampi of rats (amongst other species). These are cells that fire only when
the rat is at a certain location in its environment. Although their existence
has led to many hypotheses about rat cognitive mapping (see [5J for a substantial
discussion of place cells and mapping), it is important to note that even with
a map, there remains the computational1y intensive problem of navigation addressed,
in this paper, by TD. ''RA, being closely related to the input stimuli is quite
unlike a place cell code - the other representations all bear some similarities.
Navigating Through Temporal Difference 467 3 GOAL-FREE LEARNING One of the problems
with the TD system as described is that it is incapable oflatent learning in the
absence of reinforcement or a goal. If the goal is just taken away, but the -1
reinforcements are still applied at each step, then the values assigned to each
location will tend to -00. If both are removed, then although the agent will wander
about its environment with random gay abandon, it will not pick up anything that
could be used to speed subsequent learning. Latent learning experiments with rats
in dry mazes prove fairly conclusively that rats running mazes in the absence
of rewards and punishments learn almost as much as rats that are reinforced. One
way to solve this problem is suggested by Sutton''s DYNA architecture [7]. Briefly,
this constructs a map of place x action - next place, and takes steps in the fictitious
world constructed from its map in-between taking steps in the real world, as a
way of ironing out the computational ''bumps'' (ie inconsistencies) in the value
and action functions. Instead, it is possible to avoid constructing a complete
map by altering the repre sentation of the environment used for learning the prediction
function and optimal actions. The section on representations concluded that coarse-coded
representations are generally better than punctate ones, since information can
be shared between neighbouring points. However, not all neighbouring points are
amenable to this sharing, because of discontinuities in the value and action functions.
If there were a way of generating a coarse coded representation (generally from
a punctate one) that is sensitive to the structure of the task, rather than arbitrarily
assigned by the environment, it should provide the base for faster learning still.
In this case, neighbouring points should only be coded together if they are not
separated by the barrier. The initial exploration would allow the agent to learn
this much about the structure of the environment. Consider a set of units whose
job is to predict the future discounted sum of firings of the raw input lines.
Using ''R.Bsw during the initial stage of learning when the act.ions are still
random, if the agent is at location (3,3) of the grid, say, then the discounted
prediction of how often it will be in (3,4) (ie the frequency with which the single
unit representing (3,4) will fire) will be high, since this location is close.
However, the prediction for (7,11) will be low, because it is very unlikely to
get there quickly. Consider the effect of the barrier: locations on opposite sides
of it, eg (1,6) and (2,6), though close in the Euclidean (or Manhattan) metric
on the grid, are far apart in the task. This means that the discounted prediction
of how often the agent will be at (1,6) given that it starts at (2,6), will be
proportionately lower. Overall, the prediction units should act like a coarse
code, sensitive to the struc ture of the task. As required, this information about
the environment is entirely independent of whether or not the agent is reinforced
during its exploration. In fact, the resulting ''map'' will be more accurate if
it is not, as its exploration will be more random. The output of the prediction
units is taken as an additional source of information for the value and action
functions. Since their main aim is to create intelligently distributed representations
from punc tate ones, it is only appropriate to use these prediction units for
''RBsw and ''R4X '' Figure 4 compares average learning curves for ''RBsw with
and without these ex-468 Dayan tra mapping units, and with and without 6000 steps
of latent learning (LL) in the absence of any reinforcement. A significant improvement
is apparent. Figure 5 shows one set of predictions based on the 1lBsw representation!
after a few un-reinforced iterations. The predictions are clearly fairly well
developed and smooth - a predictable exponentially decaying hump. The only deviations
from this are at the barrier and along the edges, where the effects of impermeability
and immobility are apparent. Figure 6 shows the same set of predictions but after
2000 reinforced iterations, by which time the agent reaches the goal almost optimally.
The predictions degenerate from being roughly radially symmetric (bar the barrier)
to being highly asymmetric. Once the agent has learnt how to get to the goal from
some location, the path it will follow, and so the locations it will visit from
there, is largely fixed. The asymptotic values of the predictions will therefore
be 0 for units not on the path, and -( for those on the path, where r is the number
of steps since the agent''s start point and ''Y is the discounting factor weighting
immediate versus distant reinforcement. This is a severe limitation since it implies
that the topological information present in the early stages of learning disappears
evaporates, and with it almost all the benefits of the prediction units. 4 DISCUSSION
Navigation comprises two problems; where the agent and the goals in its environ
ment are, and how it can get to them. Having some form of cognitive map, as is
suggested by the existence of place cells, addresses the first, but leaves open
the second. For the case of one goal, the simple TD method described here is one
solution. TD planning methods are clearly robust to changes in the way the input
stimu lus is represented. Distributed codes, particularly ones that allow for
the barrier, make learning faster. This is even true for 1lA'' which is sensitive
to the orientation of the agent. All these results require each location to have
a unique representa tion - Mozer and Bachrach [4] and Chrisley [3] and references
therein look at how ambiguities can be resolved using information on the sequence
of states the agent traverses. Since these TD planning methods are totally general,
just like dynamical program ming, they are unlikely to scale well. Some evidence
for this comes from the rel atively poor performance of 1l.x , with its quadrupled
input dimension. This puts the onus back either onto dividing the task into manageable
chunks, or onto more sophisticated representation. A cknow ledgements I am very
grateful to Jay Buckingham, Kate Jeffrey, Richard Morris, Toby Tyrell, David Willshaw,
and the attendees of the PDP Workshop at Edinburgh, the Con nectionist Group at
Amherst, and a spatial learning workshop at King''s College Cambridge for their
helpful comments. This work was funded by SERC. 1 Note that these are normalised
to a maximum value of 10, for graphical convenience. Navigating Through Temporal
Difference 469 References [1] Albus, JS (1975). A new approach to manipulator
control: the Cerebellar Model Articulation Controller (CMAC). Transactions of
the ASME: Journal of Dynamical Systems, Measurement and Control, 97, pp 220-227.
[2] Barto, AG, Sutton, RS . Watkins, CJCH (1989). Learning and Sequential Decision
Making. Technical Report 89-95, Computer and Information Science, University of
Massachusetts, Amherst, MA. [3] Chrisley, RL (1990). Cognitive map construction
and use: A parallel dis tributed approach. In DS Touretzky, J Elman, TJ Sejnowski,
. GE Hinton, editors, Proceedings of the 1990 Con nectionist M odds Summer School.
San Mateo, CA: Morgan Kaufmann. [4] Mozer, MC, . Bachrach, J (1990). Discovering
the structure of a reactive en vironment by exploration. In D Touretzky, editor,
Advances in Neurallnfor mation Processing Systems, , pp 439-446. San Mateo, CA:
Morgan Kaufmann. [5] O''Keefe, J Nadel, L (1978). The Hippocampus as a Cognitive
Map. Oxford, England: Oxford University Press. [6] Sutton, RS (1988). Learning
to predict by the methods of temporal difference. Machine Learning, 3, pp 9-44.
[7] Sutton, RS (1990). Integrated architectures for learning, planning, and reacting
based on approximating dynamic progranuning. In Proceedings of the Seventh International
Conference on Machine Learning. San Mateo, CA: Morgan Kauf [8] Sutton, RS, . Barto,
AG. To appear. Time-derivative models of Pavlovian conditioning. In M Gabriel
. JW Moore, editors, Learning and Computational Neuroscience. Cambridge, MA: MIT
Press. [9J Vatkins, CJCH (1989). Learning from Delayed Rewards. PhD Thesis. Univer
sity of Cambridge, England. Agall arrier OriCIIlltloD ''Retina'' Anplar bucket
Dot rlrina 1. flrina Fig 2: The ''retina'' for 1lA 470 Dayan Average extra steps
to goal Learning iterations Fig 3: Different representations Fig 5: Initial predictions
from (5,6) Average extra steps to goal Learning iterations Fig 4: Mapping with
''RBSW Fig 6: Predictions after 2000 iterations'
- 'Introduction Hand-written digit recognition has become one of the touchstone
problems in neural networks recently. Large databases of training examples such
as the NIST (National Institute of Standards and Technology) Special Database
3 have become available, and real-world applications with clear practical value,
such as recognizing zip codes in letters, have emerged. Diverse architectures
with varying learning rules have been proposed, including feed-forward networks
(Denker et al. 1989; Ie Cun et al. 1990; Martin and Pittman 1990), self-organizing
maps (Allinson et al. 1994), and dedicated approaches such as the neocognitron
(Fukushima and Wake 1990). The problem is difficult because handwriting varies
a lot, some digits are easily confusable, and recognition must be based on small
but crucial differences. For ex ample, the digits 3 and 8, 4 and 9, and 1 and
7 have several overlapping segments, and the differences are often lost in the
noise. Thus, hand-written digit recogni tion can be seen as a process of identifying
the distinct features and producing an internal representation where the significant
differences are magnified, making the recognition easier. Laterally Interconnected
Self-organizing Maps in Handwritten Digit Recognition 737 In this paper, the Laterally
Interconnected Synergetically Self-Organizing Map ar chitecture (LISSOM; Sirosh
and Miikkulainen 1994, 1995, 1996) was employed to form such a separable representation.
The lateral inhibitory connections of the LIS SOM map decorrelate features in
the input, retaining only those differences that are the most significant. Using
LISSOM as a front end, the actual recognition can be performed by any standard
neural network architecture, such as the perceptron. The experiments showed that
while direct recognition of the digit bitmaps with a simple percept ron network
is successful 72.3 of the time, and recognizing them using a standard self-organizing
map (SOM) as the front end 84.1 of the time, the recognition rate is 88.1 based
on the LISSOM network. These results suggest that LISSOM can serve as an effective
front end for real-world handwritten character recognition systems. 2 The Recognition
System 2.1 Overall architecture The system consists of two networks: a 20 x 20
LISSOM map performs the feature analysis and decorrelation of the input, and a
single layer of 10 perceptrons the final recognition (Figure 1 (a)). The input
digit is represented as a bitmap on the 32 x 32 input layer. Each LISSOM unit
is fully connected to the input layer through the af ferent connections, and to
the other units in the map through lateral excitatory and inhibitory connections
(Figure 1 (b)). The excitatory connections are short range, connecting only to
the closest neighbors of the unit, but the inhibitory connections cover the whole
map . The percept ron layer consists of 10 units, corresponding to digits 0 to
9. The perceptrons are fully connected to the LISSOM map, receiv ing the full
activation pattern on the map as their input. The perceptron weights are learned
through the delta rule, and the LISSOM afferent and lateral weights through Hebbian
learning. 2.2 LISSOM Activity Generation and Weight Adaptation The afferent and
lateral weights in LISSOM are learned through Hebbian adapta tion. A bitmap image
is presented to the input layer, and the initial activity of the map is calculated
as the weighted sum of the input. For unit (i, j), the initial response TJij IS
where eab is the activation of input unit (a, b), Ilij ,ab is the afferent weight
connecting input unit ( a, b) to map unit (i, j), and (7 is a piecewise linear
approximation of the sigmoid activation function. The activity is then settled
through the lateral connections. Each new activity TJij (t) at step t depends
on the afferent activation and the lateral excitation and inhibition: where Eij,kl
and Iij,kl are the excitatory and inhibitory connection weights from map unit
(k, l) to (i, j) and TJkl(t - 1) is the activation of unit (k, I) during the previous
time step. The constants Ie and Ii control the relative strength of the lateral
excitation and inhibition. After the activity has settled, the afferent and lateral
weights are modified according to the Hebb rule. Afferent weights are normalized
so that the length of the weight 738 Y. CHOE, J. SIROSH, R. MIIKKULAINEN Output
Layer (10) tII''d Units with excitatory lateral connections to (iJ) Units with
inhibitory lateral connections to (iJ) Figure 1: The system architecture. (a)
The input layer is activated according to the bitmap image of digit 6. The activation
propagates through the afferent connections to the LISSOM map, and settles through
its lateral connections into a stable pattern. This pattern is the internal representation
of the input that is then recognized by the perceptron layer. Through ,the connections
from LISSOM to the perceptrons, the unit representing 6 is strongly activated,
with weak activations on other units such as 3 and 8. (b) The lateral connections
to unit (i, j), indicated by the dark square, are shown. The neighborhood of excitatory
connections (lightly shaded) is elevated from the map for a clearer view. The
units in the excitatory region also have inhibitory lateral connections (indicated
by medium shading) to the center unit. The excitatory radius is 1 and the inhibitory
radius vector remains the same; lateral weights are normalized to keep the sum
of weights constant (Sirosh and Miikkulainen 1994): IllJ,mn - VLmn[llij,mn(t) crinp1]ijmnF''
(3) where Ilij,mn is the afferent weight from input unit (m, n) to map unit (i,
j), and crinp is the input learning rate; Wij ,kl is the lateral weight (either
excitatory Eij ,kl or inhibitory Iij ,kl) from map unit (k, I) to (i, j), and
cr is the lateral learning rate (either crexc or crinh). 2.3 Percept ron Output
Generation and Weight Adaptation The perceptrons at the output of the system receive
the activation pattern on the LISSOM map as their input. The perceptrons are trained
after the LISSOM map has been organized. The activation for the perceptron unit
Om is where C is a scaling constant, 1]ij is the LISSOM map unit (i,j), and Vij,m
is the connection weight between LISSOM map unit (i,j) and output layer unit m.
The delta rule is used to train the perceptrons: the weight adaptation is proportional
to the map activity and the difference between the output and the target: where
crout is the learning rate of the percept ron weights, 1]ij is the LISSOM map
unit activity, (m is the target activation for unit m. ((m 1 if the correct digit
m, 0 otherwise). Laterally Interconnected Self-organizing Maps in Handwritten
Digit Recognition 739 I Representation I Training Test Table 1: Final Recognition
Results. The average recognition percentage and its variance over the 10 different
splits are shown for the training and test sets. The differences in each set are
statistically significant with p .9999. 3 Experiments A subset of 2992 patterns
from the NIST Database 3 was used as training and testing data.1 The patterns
were normalized to make sure taht each example had an equal effect on the LISSOM
map (Sirosh and Miikkulainen 1994). LISSOM was trained with 2000 patterns. Of
these, 1700 were used to train the perceptron layer, and the remaining 300 were
used as the validation set to determine when to stop training the perceptrons.
The final recognition performance of the whole system was measured on the remaining
992 patterns, which neither LISSOM nor the perceptrons had seen during training.
The experiment was repeated 10 times with different random splits of the 2992
input patterns into training, validation, and testing sets. The LISSOM map can
be organized starting from initially random weights. How ever, if the input dimensionality
is large, as it is in case of the 32 X 32 bitmaps, each unit on the map is activated
roughly to the same degree, and it is difficult to bootstrap the self-organizing
process (Sirosh and Miikkulainen 1994, 1996). The standard Self-Organizing Map
algorithm can be used to preorganize the map in this case. The SOM performs preliminary
feature analysis of the input, and forms a coarse topological map of the input
space. This map can then be used as the starting point for the LISSOM algorithm,
which modifies the topological organi zation and learns lateral connections that
decorrelate and represent a more clear categorization of the input patterns. The
initial self-organizing map was formed in 8 epochs over the training set, grad
ually reducing the neighborhood radius from 20 to 8. The lateral connections were
then added to the system, and over another 30 epochs, the afferent and lateral
weights of the map were adapted according to equations 3 and 4. In the beginning,
the excitation radius was set to 8 and the inhibition radius to 20. The excitation
radius was gradually decreased to 1 making the activity patterns more concentrated
and causing the units to become more selective to particular types of input pat
terns. For comparison, the initial self-organized map was also trained for another
30 epochs, gradually decreasing the neighborhood size to 1 as well. The final
afferent weights for the SOM and LISSOM maps are shown in figures 2 and 3. After
the SOM and LISSOM maps were organized, a complete set of activation patterns
on the two maps were collected. These patterns then formed the training input
for the perceptron layer. Two separate versions were each trained for 500 epochs,
one with SOM and the other with LISSOM patterns. A third perceptron layer was
trained directly with the input bitmaps as well. Recognition performance was measured
by counting how often the most highly ac tive perceptron unit was the correct
one. The results were averaged over the 10 different splits. On average, the final
LISSOMperceptron system correctly recog nized 88.1 of the 992 pattern test sets.
This is significantly better than the 84.1 1 Downloadable at ftp:j jsequoyah.ncsl.nist.gov
jpubjdatabasesj. 740 Y . CHOE, J. SIROSH, R. MIIKKULAINEN Figure 2: Final Afferent
Weights of the SOM map . The digit-like patterns represent the afferent weights
of each map unit projected on the input layer. For example, the lower left corner
represents the afferent weights of unit (0,0). High weight values are shown in
black and low in white. The pattern of weights shows the input pattern to which
this unit is most sensitive (6 in this case). There are local clusters sensitive
to each digit category. of the SOMperceptron system, and the 72.3 achieved by
the perceptron layer alone (Table 1). These results suggest that the internal
representations generated by the LISSOM map are more distinct and easier to recognize
than the raw input patterns and the representations generated by the SOM map .
4 Discussion The architecture was motivated by the hypothesis that the lateral
inhibitory con nections of the LISSOM map would decorrelate and force the map
activity patterns to become more distinct. The recognition could then be performed
by even the simplest classification architectures, such as the perceptron. Indeed,
the LISSOM representations were easier to recognize than the SOM patterns, which
lends evi dential support to the hypothesis. In additional experiments, the percept
ron output layer was replaced by a two-weight-Iayer backpropagation network and
a Hebbian associator net, and trained with the same patterns as the perceptrons.
The recog nition results were practically the same for the perceptron, backpropagation,
and Hebbian output networks, indicating that the internal representations formed
by the LISSOM map are the crucially important part of the recognition system.
A comparison of the learning curves reveals two interesting effects (figure 4).
First, even though the perceptron net trained with the raw input patterns initially
per forms well on the test set, its generalization decreases dramatically during
training. This is because the net only learns to memorize the training examples,
which does not help much with new noisy patterns. Good internal representations
are there fore crucial for generalization. Second , even though initially the
settling process of the LISSOM map forms patterns that are significantly easier
to recognize than Laterally Interconnected Self-organizing Maps in Handwritten
Digit Recognition 741 Figure 3: Final Afferent Weights of the LISSOM map. The
squares identify the above-average inhibitory lateral connections to unit (10,4)
(indicated by the thick square). Note that inhibition comes mostly from areas
of similar functionality (i.e. areas sensitive to similar input), thereby decorrelating
the map activity and forming a sparser representation of the input. the initial,
unsettled patterns (formed through the afferent connections only), this difference
becomes insignificant later during training. The afferent connections are modified
according to the final, settled patterns, and gradually learn to anticipate the
decorrelated internal representations that the lateral connections form. 5 Conclusion
The experiments reported in this paper show that LISSOM forms internal represen
tations of the input patterns that are easier to categorize than the raw inputs
and the patterns on the SOM map, and suggest that LISSOM can form a useful front
end for character recognition systems, and perhaps for other pattern recognition
systems as well (such as speech). The main direction of future work is to apply
the approach to larger data sets, including the full NIST 3 database, to use a
more powerful recognition network instead of the perceptron, and to increase the
map size to obtain a richer representation of the input space. Acknowledgements
This research was supported in part by National Science Foundation under grant
IRI-9309273. Computer time for the simulations was provided by the Pittsburgh
Supercomputing Center under grants IRI930005P and IRI940004P, and by a High Performance
Computer Time Grant from the University of Texas at Austin. References Allinson,
N. M., Johnson , M. J., and Moon, K. J. (1994). Digital realisation of self organising
maps. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems
6. San Mateo, CA: Morgan Kaufmann. 742 Y. CHOE. J. SIROSH. R. MIIKKULAINEN Comparison:Test
''SettIEiCLlSSOU'' - Epochs Figure 4: Comparison of the learning curves, A perceptron
network was trained to recognize four different kinds of internal representations:
the settled LISSOM patterns, the LISSOM patterns before settling, the patterns
on the final SOM network, and raw input bitmaps. The recognition accuracy on the
test set was then measured and averaged over 10 simulations. The generalization
of the raw input perceptron system decreases rapidly as the net learns to memorize
the training patterns. The difference of using settled and unsettled LISSOM patterns
diminishes as the afferent weights of LISSOM learn to take into account the decorrelation
performed by the lateral weights. Denker, J. S., Gardner, W. R., Graf, H. P.,
Henderson, D., Howard, R. E., Hubbard, W., Jackel, L. D., Baird, H. S., and Guyon,
I. (1989). Neural network recognizer for hand-written zip code digits. In Touretzky,
D . S., editor, Advances in Neural Information Processing Systems 1. San Mateo,
CA: Morgan Kaufmann . Fukushima, K., and Wake, N. (1990). Alphanumeric character
recognition by neocognitron. In Advanced Neural Computers, 263-270. Elsevier Science
Pub lishers B.V . (North-Holland). Ie Cun, Y., Boser, B ., Denker, J. S., Henderson,
D., Howard, R. E., Hubbard, W., and Jackel, 1. D. (1990). Handwritten digit recognition
with a back propagation network. In Touretzky, D. S., editor, Advances in Neural
Infor mation Processing Systems 2. San Mateo, CA: Morgan Kaufmann . Martin, G.
L ., and Pittman, J. A. (1990). Recognizing hand-printed letters and digits. In
Touretzky, D. S., editor, Advances in Neural Information Processing Systems 2.
San Mateo, CA: Morgan Kaufmann. Sirosh, J., and Miikkulainen, R. (1994). Cooperative
self-organization of afferent and lateral connections in cortical maps . Biological
Cybernetics, 71:66-78. Sirosh, J., and Miikkulainen, R. (1995). Ocular dominance
and patterned lateral connections in a self-organizing model of the primary visual
cortex. In Tesauro, G ., Touretzky, D. S., and Leen, T . K., editors, Advances
in Neural Information Processing Systems 7. Cambridge, MA: MIT Press. Sirosh,
J., and Miikkulainen, R. (1996). Topographic receptive fields and patterned lateral
interaction in a self-organizing model of the primary visual cortex. Neu ral Computation
(in press).'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@10
- cosine_precision@10
- cosine_recall@10
- cosine_ndcg@5
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@10
model-index:
- name: SentenceTransformer based on NovaSearch/stella_en_400M_v5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@10
value: 0.9466
name: Cosine Accuracy@10
- type: cosine_precision@10
value: 0.09466
name: Cosine Precision@10
- type: cosine_recall@10
value: 0.9466
name: Cosine Recall@10
- type: cosine_ndcg@5
value: 0.8507439067474944
name: Cosine Ndcg@5
- type: cosine_ndcg@10
value: 0.8602810144357889
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8322816666666671
name: Cosine Mrr@10
- type: cosine_map@10
value: 0.8322816666666666
name: Cosine Map@10
---
# SentenceTransformer based on NovaSearch/stella_en_400M_v5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NovaSearch/stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [NovaSearch/stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5) <!-- at revision dcae70d3f2b4aaee36afc3cde638ca4614497aec -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 1024, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
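The stack above mean-pools the base transformer's token embeddings (up to 512 tokens per input) and passes the pooled vector through a final 1024-to-1024 dense projection, so every sentence maps to a single 1024-dimensional embedding scored with cosine similarity. A minimal sketch for checking those properties is shown below; it reuses the same placeholder model id as the usage example in the next section, and the second sentence is an arbitrary illustrative string rather than one of the evaluation queries.

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder id, as in the usage example below; substitute the published repository name.
model = SentenceTransformer("sentence_transformers_model_id")

# The Dense module above keeps the embedding size at 1024 dimensions,
# and the Transformer module truncates inputs at 512 tokens.
print(model.get_sentence_embedding_dimension())  # expected: 1024
print(model.max_seq_length)                      # expected: 512

# Cosine similarity is the scoring function used for the retrieval metrics reported above.
embeddings = model.encode([
    "Effect of input stimulus coding on self-supervised learning performance",  # query from the inference example below
    "TD planning with distributed stimulus representations",                    # illustrative stand-in passage
])
print(util.cos_sim(embeddings[0], embeddings[1]))  # a 1x1 tensor holding the similarity score
```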
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
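# ("sentence_transformers_model_id" is the card's placeholder; replace it with the
#  actual repository id once the model is published on the Hub.)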
# Run inference
sentences = [
'Effect of input stimulus coding on self-supervised learning performance',
"INTRODUCTION Temporal difference (TD) planning [6, 7] uses prediction for control. Consider an agent moving around a finite grid such as the one in figure 1 (the agent is incapable of crossing the barrier) trying to reach a goal whose position it does not know. If it can predict how far away from the goal it is at the current step, and how far away from the goal it is at the next step, after making a move, then it can decide whether or not that move was helpful or harmful. If, in addition, it can record this fact, then it can learn how to navigate to the goal. This generation of actions from predictions is closely related to the mechanism of dynamical programming. TD is used to learn the predictions in the first place. Consider the agent moving around randomly on the grid, receiving a negative reinforcement of -1 for every move it makes apart from moves which take it onto the goal. In this case, if it can estimat.e from every location it visits, how much reinforcement (discounted by how soon it arrives) it will get before it next reaches the goal, it will be predicting how far away it is, based on the random method of selecting actions. TD's mechanism of learning is to force the predictions to be consistent; the prediction from location a should be -1 more than the average of the predictions from the locations that can be reached in one step (hence the extra -1 reinforcement) from a. 464 Navigating Through Temporal Difference 465 If the agent initially selects each action with the same probability, then the estimate of future reinforcement from a will be monotonically related to how many steps a is away from the goal. This makes the predictions useful for criticising actions as above. In practice, the agent will modify its actions according to this criticism at the same time as learning the predictions based on those actions. Barto, Sutton and Watkins [2] develop this example, and show how the TD mech anism coupled with a punctate representation of the stimulus (referred to as'RBsw below) finds the optimal paths to the goal. 'RBsw ignores the cues shown in figure 1, and devotes one input unit to each location on the grid, which fires if and only if the agent is at that place. TD methods can however work with more general codes. Section 2 considers al ternative representations, including ones that are sensitive to the orientation of the agent as it moves through the grid, and section 3 looks at a restricted form of la. tent learning - what the agent can divine about its environment in the absence of reinforcement. Both techniques can improve the speed of learning. 2 ALTERNATE REPRESENTATIONS Stimulus representations, the means by which the agent finds out from the environ ment where it is, can be classified along two dimensions; whether they are punctate or distributed, and whether they are directionally sensitive or in register with the world. Over most of the grid, a 'sensible' distributed representation, such as a coarse-coded one, would be expected to make learning faster, as information about the value and action functions could be shared across adjacent grid points. There are points of discontinuity in the actions, as in the region above the right hand arm of the barrier, but they are few. In his PhD thesis [9], Watkins considers a rather similar problem to that in figure I, and solves it using his variant ofTD, Q-Iearning, based on a CMAC [1] coarse-coded representation of the space. 
Since his agent moves in a continuous bounded space, rather than being confined merely to discrete grid points, something of this sort is anyway essential. After the initial learning, Watkins arbitrarily makes the agent move ten times more slowly in a closed section of the space. This has a similar effect to the barrier in inducing a discontinuity in the action space. Despite the CMACS forcing the system to share information across such discontinuities, they were able to learn the task quickly. The other dimension over which representations may vary involves the extent to which they are sensitive to the direction in which the agent is facing. This is of interest if the agent must construe its location from the cues around the grid. In this case, rather than moving North, South, East or West, which are actions registered with the world, the agent should only move Ahead, Left or Right (Behind is disabled as an additional constraint), whose effects are also orientation dependent. This, together with the fact that the representation will be less compact (it having a larger input dimensionality) should make learning slower. Dynamical programming and its equivalents are notoriously subject to Bellman's curse of dimensionality, an engineering equivalent of exponential explosion in search. Table 1 shows four possible representations classified along these two dimensions. 466 Dayan Coarse ness Directionally Punctate Distributed Sensltlve R,x RA Insensltlve 'RBSW 'RCMAC Table 1: Representations. 'RBSW is the representation Barto, Sutton and Watkins used. R,x is punctate and directionally sensitive - it devotes four units to every grid point, one of which fires for each possible orientation of the agent. 'RcIAC' the equivalent of Watkins' representation, was not simulated, because its capabilities would not differ markedly from those of the mapping-based representation developed in the next section. nA is rather different from the other representations; it provides a test of a represen tation which is more directly associated with the sensory information that might be available directly from the cues. Figure 2 shows how 'RA works. Various identifiable cues, C 1 ... C c (c 7 in the figure) are scattered around the outside of the grid, and the agent has a fictitious 'retina' which rotates with it. This retina is divided into a number of angular buckets (8 in the figure), and each bucket has c units, the iSh one of which responds if the cue Ci is visible in that bucket. This representation is clearly directionally sensitive (if the agent is facing a different way, then so is its retina, and so no cue will be visible in the same bucket as it was before), and also distributed, since in general more than one cue will be visible from every location. Note that there is no restriction on the number of units that can fire in each bucket at any time - more than one will fire if more than one cue is visible there. Also, under the present system 'RA will in general not work if its coding is ambiguous - grid points must be distinguishable. Finally, it should be clear that 'RA is not biologically plausible. Figure 3 shows the learning curves for the three representations simulated. Each point is generated by switching off the learning temporarily after a certain number of iterations, starting the agent from everywhere in the grid, and averaging how many steps it takes in getting to the goal over and above the minimum necesary. 
It is apparent that n.x is substantially worse, but, surprisingly, that 'RA is actually better than 'RBSW . This implies that the added advantage of its distributed na ture more than outweighs its disadvantages of having more components and being directionally sensitive. One of the motivations behind studying alternate representations is the experimen tal findings on place cells in the hippocampi of rats (amongst other species). These are cells that fire only when the rat is at a certain location in its environment. Although their existence has led to many hypotheses about rat cognitive mapping (see [5J for a substantial discussion of place cells and mapping), it is important to note that even with a map, there remains the computational1y intensive problem of navigation addressed, in this paper, by TD. 'RA, being closely related to the input stimuli is quite unlike a place cell code - the other representations all bear some similarities. Navigating Through Temporal Difference 467 3 GOAL-FREE LEARNING One of the problems with the TD system as described is that it is incapable oflatent learning in the absence of reinforcement or a goal. If the goal is just taken away, but the -1 reinforcements are still applied at each step, then the values assigned to each location will tend to -00. If both are removed, then although the agent will wander about its environment with random gay abandon, it will not pick up anything that could be used to speed subsequent learning. Latent learning experiments with rats in dry mazes prove fairly conclusively that rats running mazes in the absence of rewards and punishments learn almost as much as rats that are reinforced. One way to solve this problem is suggested by Sutton's DYNA architecture [7]. Briefly, this constructs a map of place x action - next place, and takes steps in the fictitious world constructed from its map in-between taking steps in the real world, as a way of ironing out the computational 'bumps' (ie inconsistencies) in the value and action functions. Instead, it is possible to avoid constructing a complete map by altering the repre sentation of the environment used for learning the prediction function and optimal actions. The section on representations concluded that coarse-coded representations are generally better than punctate ones, since information can be shared between neighbouring points. However, not all neighbouring points are amenable to this sharing, because of discontinuities in the value and action functions. If there were a way of generating a coarse coded representation (generally from a punctate one) that is sensitive to the structure of the task, rather than arbitrarily assigned by the environment, it should provide the base for faster learning still. In this case, neighbouring points should only be coded together if they are not separated by the barrier. The initial exploration would allow the agent to learn this much about the structure of the environment. Consider a set of units whose job is to predict the future discounted sum of firings of the raw input lines. Using 'R.Bsw during the initial stage of learning when the act.ions are still random, if the agent is at location (3,3) of the grid, say, then the discounted prediction of how often it will be in (3,4) (ie the frequency with which the single unit representing (3,4) will fire) will be high, since this location is close. However, the prediction for (7,11) will be low, because it is very unlikely to get there quickly. 
Consider the effect of the barrier: locations on opposite sides of it, eg (1,6) and (2,6), though close in the Euclidean (or Manhattan) metric on the grid, are far apart in the task. This means that the discounted prediction of how often the agent will be at (1,6) given that it starts at (2,6), will be proportionately lower. Overall, the prediction units should act like a coarse code, sensitive to the struc ture of the task. As required, this information about the environment is entirely independent of whether or not the agent is reinforced during its exploration. In fact, the resulting 'map' will be more accurate if it is not, as its exploration will be more random. The output of the prediction units is taken as an additional source of information for the value and action functions. Since their main aim is to create intelligently distributed representations from punc tate ones, it is only appropriate to use these prediction units for 'RBsw and 'R4X ' Figure 4 compares average learning curves for 'RBsw with and without these ex-468 Dayan tra mapping units, and with and without 6000 steps of latent learning (LL) in the absence of any reinforcement. A significant improvement is apparent. Figure 5 shows one set of predictions based on the 1lBsw representation! after a few un-reinforced iterations. The predictions are clearly fairly well developed and smooth - a predictable exponentially decaying hump. The only deviations from this are at the barrier and along the edges, where the effects of impermeability and immobility are apparent. Figure 6 shows the same set of predictions but after 2000 reinforced iterations, by which time the agent reaches the goal almost optimally. The predictions degenerate from being roughly radially symmetric (bar the barrier) to being highly asymmetric. Once the agent has learnt how to get to the goal from some location, the path it will follow, and so the locations it will visit from there, is largely fixed. The asymptotic values of the predictions will therefore be 0 for units not on the path, and -( for those on the path, where r is the number of steps since the agent's start point and 'Y is the discounting factor weighting immediate versus distant reinforcement. This is a severe limitation since it implies that the topological information present in the early stages of learning disappears evaporates, and with it almost all the benefits of the prediction units. 4 DISCUSSION Navigation comprises two problems; where the agent and the goals in its environ ment are, and how it can get to them. Having some form of cognitive map, as is suggested by the existence of place cells, addresses the first, but leaves open the second. For the case of one goal, the simple TD method described here is one solution. TD planning methods are clearly robust to changes in the way the input stimu lus is represented. Distributed codes, particularly ones that allow for the barrier, make learning faster. This is even true for 1lA' which is sensitive to the orientation of the agent. All these results require each location to have a unique representa tion - Mozer and Bachrach [4] and Chrisley [3] and references therein look at how ambiguities can be resolved using information on the sequence of states the agent traverses. Since these TD planning methods are totally general, just like dynamical program ming, they are unlikely to scale well. Some evidence for this comes from the rel atively poor performance of 1l.x , with its quadrupled input dimension. 
This puts the onus back either onto dividing the task into manageable chunks, or onto more sophisticated representation. A cknow ledgements I am very grateful to Jay Buckingham, Kate Jeffrey, Richard Morris, Toby Tyrell, David Willshaw, and the attendees of the PDP Workshop at Edinburgh, the Con nectionist Group at Amherst, and a spatial learning workshop at King's College Cambridge for their helpful comments. This work was funded by SERC. 1 Note that these are normalised to a maximum value of 10, for graphical convenience. Navigating Through Temporal Difference 469 References [1] Albus, JS (1975). A new approach to manipulator control: the Cerebellar Model Articulation Controller (CMAC). Transactions of the ASME: Journal of Dynamical Systems, Measurement and Control, 97, pp 220-227. [2] Barto, AG, Sutton, RS . Watkins, CJCH (1989). Learning and Sequential Decision Making. Technical Report 89-95, Computer and Information Science, University of Massachusetts, Amherst, MA. [3] Chrisley, RL (1990). Cognitive map construction and use: A parallel dis tributed approach. In DS Touretzky, J Elman, TJ Sejnowski, . GE Hinton, editors, Proceedings of the 1990 Con nectionist M odds Summer School. San Mateo, CA: Morgan Kaufmann. [4] Mozer, MC, . Bachrach, J (1990). Discovering the structure of a reactive en vironment by exploration. In D Touretzky, editor, Advances in Neurallnfor mation Processing Systems, , pp 439-446. San Mateo, CA: Morgan Kaufmann. [5] O'Keefe, J Nadel, L (1978). The Hippocampus as a Cognitive Map. Oxford, England: Oxford University Press. [6] Sutton, RS (1988). Learning to predict by the methods of temporal difference. Machine Learning, 3, pp 9-44. [7] Sutton, RS (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic progranuning. In Proceedings of the Seventh International Conference on Machine Learning. San Mateo, CA: Morgan Kauf [8] Sutton, RS, . Barto, AG. To appear. Time-derivative models of Pavlovian conditioning. In M Gabriel . JW Moore, editors, Learning and Computational Neuroscience. Cambridge, MA: MIT Press. [9J Vatkins, CJCH (1989). Learning from Delayed Rewards. PhD Thesis. Univer sity of Cambridge, England. Agall arrier OriCIIlltloD 'Retina' Anplar bucket Dot rlrina 1. flrina Fig 2: The 'retina' for 1lA 470 Dayan Average extra steps to goal Learning iterations Fig 3: Different representations Fig 5: Initial predictions from (5,6) Average extra steps to goal Learning iterations Fig 4: Mapping with 'RBSW Fig 6: Predictions after 2000 iterations",
"Introduction Hand-written digit recognition has become one of the touchstone problems in neural networks recently. Large databases of training examples such as the NIST (National Institute of Standards and Technology) Special Database 3 have become available, and real-world applications with clear practical value, such as recognizing zip codes in letters, have emerged. Diverse architectures with varying learning rules have been proposed, including feed-forward networks (Denker et al. 1989; Ie Cun et al. 1990; Martin and Pittman 1990), self-organizing maps (Allinson et al. 1994), and dedicated approaches such as the neocognitron (Fukushima and Wake 1990). The problem is difficult because handwriting varies a lot, some digits are easily confusable, and recognition must be based on small but crucial differences. For ex ample, the digits 3 and 8, 4 and 9, and 1 and 7 have several overlapping segments, and the differences are often lost in the noise. Thus, hand-written digit recogni tion can be seen as a process of identifying the distinct features and producing an internal representation where the significant differences are magnified, making the recognition easier. Laterally Interconnected Self-organizing Maps in Handwritten Digit Recognition 737 In this paper, the Laterally Interconnected Synergetically Self-Organizing Map ar chitecture (LISSOM; Sirosh and Miikkulainen 1994, 1995, 1996) was employed to form such a separable representation. The lateral inhibitory connections of the LIS SOM map decorrelate features in the input, retaining only those differences that are the most significant. Using LISSOM as a front end, the actual recognition can be performed by any standard neural network architecture, such as the perceptron. The experiments showed that while direct recognition of the digit bitmaps with a simple percept ron network is successful 72.3 of the time, and recognizing them using a standard self-organizing map (SOM) as the front end 84.1 of the time, the recognition rate is 88.1 based on the LISSOM network. These results suggest that LISSOM can serve as an effective front end for real-world handwritten character recognition systems. 2 The Recognition System 2.1 Overall architecture The system consists of two networks: a 20 x 20 LISSOM map performs the feature analysis and decorrelation of the input, and a single layer of 10 perceptrons the final recognition (Figure 1 (a)). The input digit is represented as a bitmap on the 32 x 32 input layer. Each LISSOM unit is fully connected to the input layer through the af ferent connections, and to the other units in the map through lateral excitatory and inhibitory connections (Figure 1 (b)). The excitatory connections are short range, connecting only to the closest neighbors of the unit, but the inhibitory connections cover the whole map . The percept ron layer consists of 10 units, corresponding to digits 0 to 9. The perceptrons are fully connected to the LISSOM map, receiv ing the full activation pattern on the map as their input. The perceptron weights are learned through the delta rule, and the LISSOM afferent and lateral weights through Hebbian learning. 2.2 LISSOM Activity Generation and Weight Adaptation The afferent and lateral weights in LISSOM are learned through Hebbian adapta tion. A bitmap image is presented to the input layer, and the initial activity of the map is calculated as the weighted sum of the input. 
For unit (i, j), the initial response TJij IS where eab is the activation of input unit (a, b), Ilij ,ab is the afferent weight connecting input unit ( a, b) to map unit (i, j), and (7 is a piecewise linear approximation of the sigmoid activation function. The activity is then settled through the lateral connections. Each new activity TJij (t) at step t depends on the afferent activation and the lateral excitation and inhibition: where Eij,kl and Iij,kl are the excitatory and inhibitory connection weights from map unit (k, l) to (i, j) and TJkl(t - 1) is the activation of unit (k, I) during the previous time step. The constants Ie and Ii control the relative strength of the lateral excitation and inhibition. After the activity has settled, the afferent and lateral weights are modified according to the Hebb rule. Afferent weights are normalized so that the length of the weight 738 Y. CHOE, J. SIROSH, R. MIIKKULAINEN Output Layer (10) tII'd Units with excitatory lateral connections to (iJ) Units with inhibitory lateral connections to (iJ) Figure 1: The system architecture. (a) The input layer is activated according to the bitmap image of digit 6. The activation propagates through the afferent connections to the LISSOM map, and settles through its lateral connections into a stable pattern. This pattern is the internal representation of the input that is then recognized by the perceptron layer. Through ,the connections from LISSOM to the perceptrons, the unit representing 6 is strongly activated, with weak activations on other units such as 3 and 8. (b) The lateral connections to unit (i, j), indicated by the dark square, are shown. The neighborhood of excitatory connections (lightly shaded) is elevated from the map for a clearer view. The units in the excitatory region also have inhibitory lateral connections (indicated by medium shading) to the center unit. The excitatory radius is 1 and the inhibitory radius vector remains the same; lateral weights are normalized to keep the sum of weights constant (Sirosh and Miikkulainen 1994): IllJ,mn - VLmn[llij,mn(t) crinp1]ijmnF' (3) where Ilij,mn is the afferent weight from input unit (m, n) to map unit (i, j), and crinp is the input learning rate; Wij ,kl is the lateral weight (either excitatory Eij ,kl or inhibitory Iij ,kl) from map unit (k, I) to (i, j), and cr is the lateral learning rate (either crexc or crinh). 2.3 Percept ron Output Generation and Weight Adaptation The perceptrons at the output of the system receive the activation pattern on the LISSOM map as their input. The perceptrons are trained after the LISSOM map has been organized. The activation for the perceptron unit Om is where C is a scaling constant, 1]ij is the LISSOM map unit (i,j), and Vij,m is the connection weight between LISSOM map unit (i,j) and output layer unit m. The delta rule is used to train the perceptrons: the weight adaptation is proportional to the map activity and the difference between the output and the target: where crout is the learning rate of the percept ron weights, 1]ij is the LISSOM map unit activity, (m is the target activation for unit m. ((m 1 if the correct digit m, 0 otherwise). Laterally Interconnected Self-organizing Maps in Handwritten Digit Recognition 739 I Representation I Training Test Table 1: Final Recognition Results. The average recognition percentage and its variance over the 10 different splits are shown for the training and test sets. The differences in each set are statistically significant with p .9999. 
3 Experiments A subset of 2992 patterns from the NIST Database 3 was used as training and testing data.1 The patterns were normalized to make sure taht each example had an equal effect on the LISSOM map (Sirosh and Miikkulainen 1994). LISSOM was trained with 2000 patterns. Of these, 1700 were used to train the perceptron layer, and the remaining 300 were used as the validation set to determine when to stop training the perceptrons. The final recognition performance of the whole system was measured on the remaining 992 patterns, which neither LISSOM nor the perceptrons had seen during training. The experiment was repeated 10 times with different random splits of the 2992 input patterns into training, validation, and testing sets. The LISSOM map can be organized starting from initially random weights. How ever, if the input dimensionality is large, as it is in case of the 32 X 32 bitmaps, each unit on the map is activated roughly to the same degree, and it is difficult to bootstrap the self-organizing process (Sirosh and Miikkulainen 1994, 1996). The standard Self-Organizing Map algorithm can be used to preorganize the map in this case. The SOM performs preliminary feature analysis of the input, and forms a coarse topological map of the input space. This map can then be used as the starting point for the LISSOM algorithm, which modifies the topological organi zation and learns lateral connections that decorrelate and represent a more clear categorization of the input patterns. The initial self-organizing map was formed in 8 epochs over the training set, grad ually reducing the neighborhood radius from 20 to 8. The lateral connections were then added to the system, and over another 30 epochs, the afferent and lateral weights of the map were adapted according to equations 3 and 4. In the beginning, the excitation radius was set to 8 and the inhibition radius to 20. The excitation radius was gradually decreased to 1 making the activity patterns more concentrated and causing the units to become more selective to particular types of input pat terns. For comparison, the initial self-organized map was also trained for another 30 epochs, gradually decreasing the neighborhood size to 1 as well. The final afferent weights for the SOM and LISSOM maps are shown in figures 2 and 3. After the SOM and LISSOM maps were organized, a complete set of activation patterns on the two maps were collected. These patterns then formed the training input for the perceptron layer. Two separate versions were each trained for 500 epochs, one with SOM and the other with LISSOM patterns. A third perceptron layer was trained directly with the input bitmaps as well. Recognition performance was measured by counting how often the most highly ac tive perceptron unit was the correct one. The results were averaged over the 10 different splits. On average, the final LISSOMperceptron system correctly recog nized 88.1 of the 992 pattern test sets. This is significantly better than the 84.1 1 Downloadable at ftp:j jsequoyah.ncsl.nist.gov jpubjdatabasesj. 740 Y . CHOE, J. SIROSH, R. MIIKKULAINEN Figure 2: Final Afferent Weights of the SOM map . The digit-like patterns represent the afferent weights of each map unit projected on the input layer. For example, the lower left corner represents the afferent weights of unit (0,0). High weight values are shown in black and low in white. The pattern of weights shows the input pattern to which this unit is most sensitive (6 in this case). 
There are local clusters sensitive to each digit category. of the SOMperceptron system, and the 72.3 achieved by the perceptron layer alone (Table 1). These results suggest that the internal representations generated by the LISSOM map are more distinct and easier to recognize than the raw input patterns and the representations generated by the SOM map . 4 Discussion The architecture was motivated by the hypothesis that the lateral inhibitory con nections of the LISSOM map would decorrelate and force the map activity patterns to become more distinct. The recognition could then be performed by even the simplest classification architectures, such as the perceptron. Indeed, the LISSOM representations were easier to recognize than the SOM patterns, which lends evi dential support to the hypothesis. In additional experiments, the percept ron output layer was replaced by a two-weight-Iayer backpropagation network and a Hebbian associator net, and trained with the same patterns as the perceptrons. The recog nition results were practically the same for the perceptron, backpropagation, and Hebbian output networks, indicating that the internal representations formed by the LISSOM map are the crucially important part of the recognition system. A comparison of the learning curves reveals two interesting effects (figure 4). First, even though the perceptron net trained with the raw input patterns initially per forms well on the test set, its generalization decreases dramatically during training. This is because the net only learns to memorize the training examples, which does not help much with new noisy patterns. Good internal representations are there fore crucial for generalization. Second , even though initially the settling process of the LISSOM map forms patterns that are significantly easier to recognize than Laterally Interconnected Self-organizing Maps in Handwritten Digit Recognition 741 Figure 3: Final Afferent Weights of the LISSOM map. The squares identify the above-average inhibitory lateral connections to unit (10,4) (indicated by the thick square). Note that inhibition comes mostly from areas of similar functionality (i.e. areas sensitive to similar input), thereby decorrelating the map activity and forming a sparser representation of the input. the initial, unsettled patterns (formed through the afferent connections only), this difference becomes insignificant later during training. The afferent connections are modified according to the final, settled patterns, and gradually learn to anticipate the decorrelated internal representations that the lateral connections form. 5 Conclusion The experiments reported in this paper show that LISSOM forms internal represen tations of the input patterns that are easier to categorize than the raw inputs and the patterns on the SOM map, and suggest that LISSOM can form a useful front end for character recognition systems, and perhaps for other pattern recognition systems as well (such as speech). The main direction of future work is to apply the approach to larger data sets, including the full NIST 3 database, to use a more powerful recognition network instead of the perceptron, and to increase the map size to obtain a richer representation of the input space. Acknowledgements This research was supported in part by National Science Foundation under grant IRI-9309273. 
Computer time for the simulations was provided by the Pittsburgh Supercomputing Center under grants IRI930005P and IRI940004P, and by a High Performance Computer Time Grant from the University of Texas at Austin. References Allinson, N. M., Johnson , M. J., and Moon, K. J. (1994). Digital realisation of self organising maps. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 6. San Mateo, CA: Morgan Kaufmann. 742 Y. CHOE. J. SIROSH. R. MIIKKULAINEN Comparison:Test 'SettIEiCLlSSOU' - Epochs Figure 4: Comparison of the learning curves, A perceptron network was trained to recognize four different kinds of internal representations: the settled LISSOM patterns, the LISSOM patterns before settling, the patterns on the final SOM network, and raw input bitmaps. The recognition accuracy on the test set was then measured and averaged over 10 simulations. The generalization of the raw input perceptron system decreases rapidly as the net learns to memorize the training patterns. The difference of using settled and unsettled LISSOM patterns diminishes as the afferent weights of LISSOM learn to take into account the decorrelation performed by the lateral weights. Denker, J. S., Gardner, W. R., Graf, H. P., Henderson, D., Howard, R. E., Hubbard, W., Jackel, L. D., Baird, H. S., and Guyon, I. (1989). Neural network recognizer for hand-written zip code digits. In Touretzky, D . S., editor, Advances in Neural Information Processing Systems 1. San Mateo, CA: Morgan Kaufmann . Fukushima, K., and Wake, N. (1990). Alphanumeric character recognition by neocognitron. In Advanced Neural Computers, 263-270. Elsevier Science Pub lishers B.V . (North-Holland). Ie Cun, Y., Boser, B ., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, 1. D. (1990). Handwritten digit recognition with a back propagation network. In Touretzky, D. S., editor, Advances in Neural Infor mation Processing Systems 2. San Mateo, CA: Morgan Kaufmann . Martin, G. L ., and Pittman, J. A. (1990). Recognizing hand-printed letters and digits. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 2. San Mateo, CA: Morgan Kaufmann. Sirosh, J., and Miikkulainen, R. (1994). Cooperative self-organization of afferent and lateral connections in cortical maps . Biological Cybernetics, 71:66-78. Sirosh, J., and Miikkulainen, R. (1995). Ocular dominance and patterned lateral connections in a self-organizing model of the primary visual cortex. In Tesauro, G ., Touretzky, D. S., and Leen, T . K., editors, Advances in Neural Information Processing Systems 7. Cambridge, MA: MIT Press. Sirosh, J., and Miikkulainen, R. (1996). Topographic receptive fields and patterned lateral interaction in a self-organizing model of the primary visual cortex. Neu ral Computation (in press).",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
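Beyond the pairwise similarity matrix, the same embeddings can drive a small retrieval step. The sketch below uses the library's `util.semantic_search` helper and arbitrarily treats the first encoded text as the query and the rest as a tiny corpus; it is an illustrative addition, not part of the original card's training or evaluation setup.

```python
from sentence_transformers import util

query_embedding = embeddings[0]      # treat the first encoded text as the query
corpus_embeddings = embeddings[1:]   # the remaining texts act as a tiny corpus

# Returns one ranked hit list per query; each hit has a corpus_id and a cosine score.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(hit["corpus_id"], round(hit["score"], 4))
```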
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@10 | 0.9466 |
| cosine_precision@10 | 0.0947 |
| cosine_recall@10 | 0.9466 |
| cosine_ndcg@5 | 0.8507 |
| **cosine_ndcg@10** | **0.8603** |
| cosine_mrr@10 | 0.8323 |
| cosine_map@10 | 0.8323 |
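These numbers come from the evaluator linked above. The snippet below is a minimal sketch of how such an evaluation can be reproduced; the model path and the toy query/corpus/relevance dictionaries are placeholders, not the actual data behind the reported metrics.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder path; substitute the checkpoint described by this card.
model = SentenceTransformer("path/to/this-model")

# Hypothetical toy data: query ids and corpus ids mapped to texts,
# plus the set of relevant corpus ids per query.
queries = {"q1": "Design approach for stabilizing analog VLSI neural systems"}
corpus = {
    "d1": "INTRODUCTION The term 'lateral inhibition' first arose in neurophysiology ...",
    "d2": "An unrelated abstract about handwritten digit recognition ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="toy-ir-eval",
)
print(evaluator(model))  # reports cosine_accuracy@k, cosine_ndcg@10, etc.
```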
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 14,255 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 13.4 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 508.46 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Proposed architecture for time-based pattern recognition in speech, motion, and signatures</code> | <code>INTRODUCTION Recent interest in connectionist, or "neural" networks has emphasized their ability to store, retrieve and process patterns1,2. For most applications, the patterns to be processed are static in the sense that they lack temporal context. Another important class consists of those problems that require the processing of temporal patterns. In these the information to be learned or processed is not a particular pattern but a sequence of patterns. Such problems include speech processing, signature verification, motion detection, and predictive signal processin,r-8. More precisely, temporal pattern processing means that the desired output depends not only on the current input but also on those preceding or following it as well. This implies that two identical inputs at different time steps might yield different desired outputs depending on what patterns precede or follow them . There is another feature characteristic of much temporal pattern processing. Here an entire sequence of...</code> |
| <code>Design approach for stabilizing analog VLSI neural systems</code> | <code>INTRODUCTION The term "lateral inhibition" first arose in neurophysiology to describe a common form of neural circuitry in which the output of each neuron in some population is used to inhibit the response of each of its neighbors. Perhaps the best understood example is the horizontal cell layer in the vertebrate retina, in which lateral inhibition simultaneously enhances intensity edges and acts as an automatic lain control to extend the dynamic range of the retina as a whole. The principle has been used in the design of artificial neural system algorithms by Kohonen 2 and others and in the electronic design of neural chips by Carver Mead et. al.3 ,4. In the VLSI implementation of neural systems, it is convenient to build lateral inhibition networks by using a locally connected on-chip resistive grid. Linear resistors fabricated in, e.g., polysilicon, yield a very compact realization, and nonlinear resistive grids, made from MOS transistors, have been found useful for image segmentati...</code> |
| <code>Neural network classifier using coding theory for improved classification capacity</code> | <code>INTRODUCTION Associative recall using neural networks has recently received a great deal of attention. Hopfield in his papers [1,2) deSCribes a mechanism which iterates through a feedback loop and stabilizes at the memory element that is nearest the input, provided that not many memory vectors are stored in the machine. He has also shown that the number of memories that can be stored in an N-neuron system is about O.15N for N between 30 and 100. McEliece et al. in their work (3) showed that for synchronous operation of the Hopfield memory about N (2IogN) data vectors can be stored reliably when N is large. Abu-Mostafa (4) has predicted that the upper bound for the number of data vectors in an N-neuron Hopfield machine is N. We believe that one should be able to devise a machine with M, the number of data vectors, linear in N and larger than the O.15N achieved by the Hopfield method. Figure 1 (a) Classification problems versus (b) Error control decoding problems In this paper we are spe...</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
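For reference, a loss object matching these parameters could be constructed as below. This is a sketch: `model` stands for the SentenceTransformer being trained, and the base checkpoint name is a placeholder rather than the one actually used for this card.

```python
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-large-en-v1.5")  # placeholder base model

# scale=20.0 and cosine similarity mirror the parameters listed above;
# mini_batch_size only controls the gradient-caching chunk size, not the effective batch size.
loss = CachedMultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
    mini_batch_size=32,
)
```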
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 500
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.01
- `bf16`: True
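Assuming the standard `SentenceTransformerTrainer` workflow (a reasonable assumption for an auto-generated card like this one, though the exact training script is not shown), the non-default values above translate to roughly the following arguments:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",                # placeholder output directory
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=500,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.01,
    bf16=True,
)
```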
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 500
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.01
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | cosine_ndcg@10 |
|:------:|:----:|:-------------:|:--------------:|
| 0.0893 | 10 | 0.5247 | 0.8247 |
| 0.1786 | 20 | 0.2625 | 0.8446 |
| 0.2679 | 30 | 0.2159 | 0.8485 |
| 0.3571 | 40 | 0.1849 | 0.8487 |
| 0.4464 | 50 | 0.2149 | 0.8506 |
| 0.5357 | 60 | 0.1538 | 0.8534 |
| 0.625 | 70 | 0.1617 | 0.8547 |
| 0.7143 | 80 | 0.1463 | 0.8575 |
| 0.8036 | 90 | 0.1626 | 0.8592 |
| 0.8929 | 100 | 0.1334 | 0.8598 |
| 0.9821 | 110 | 0.168 | 0.8603 |
### Framework Versions
- Python: 3.12.9
- Sentence Transformers: 3.4.1
- Transformers: 4.50.0
- PyTorch: 2.5.1
- Accelerate: 1.5.2
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
offfragnor123/NEERA2.0
|
offfragnor123
| 2025-06-19T09:52:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-18T14:37:47Z |
---
license: creativeml-openrail-m
---
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_24_1_49
|
winnieyangwannan
| 2025-06-19T09:51:31Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:18:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_26_1_49
|
winnieyangwannan
| 2025-06-19T09:50:33Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:37:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MahiH/dialogpt-finetuned-chatbot
|
MahiH
| 2025-06-19T09:48:14Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-12T16:02:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomaarsen/splade-cocondenser-ensembledistil-sts
|
tomaarsen
| 2025-06-19T09:46:42Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sparse-encoder",
"sparse",
"splade",
"generated_from_trainer",
"dataset_size:5749",
"loss:SpladeLoss",
"loss:SparseCosineSimilarityLoss",
"loss:FlopsLoss",
"feature-extraction",
"en",
"dataset:sentence-transformers/stsb",
"arxiv:1908.10084",
"arxiv:2205.04733",
"arxiv:2004.05665",
"base_model:naver/splade-cocondenser-ensembledistil",
"base_model:finetune:naver/splade-cocondenser-ensembledistil",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-19T09:46:30Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:5749
- loss:SpladeLoss
- loss:SparseCosineSimilarityLoss
- loss:FlopsLoss
base_model: naver/splade-cocondenser-ensembledistil
widget:
- text: There is no 'still' that is not relative to some other object.
- text: A woman is adding oil on fishes.
- text: Minimum wage laws hurt the least skilled, least productive the most.
- text: Although I believe Searle is mistaken, I don't think you have found the problem.
- text: A man plays the guitar.
datasets:
- sentence-transformers/stsb
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- active_dims
- sparsity_ratio
co2_eq_emissions:
emissions: 1.9041862820434683
energy_consumed: 0.00489883325026233
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.022
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: splade-cocondenser-ensembledistil trained on STS
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.886667090425421
name: Pearson Cosine
- type: spearman_cosine
value: 0.8833211393853202
name: Spearman Cosine
- type: active_dims
value: 49.54833221435547
name: Active Dims
- type: sparsity_ratio
value: 0.9983766354690271
name: Sparsity Ratio
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.8538687321467977
name: Pearson Cosine
- type: spearman_cosine
value: 0.8473643880811903
name: Spearman Cosine
- type: active_dims
value: 53.73966598510742
name: Active Dims
- type: sparsity_ratio
value: 0.9982393137413961
name: Sparsity Ratio
---
# splade-cocondenser-ensembledistil trained on STS
This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil) on the [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil) <!-- at revision 25178a62708a3ab1b5c4b5eb30764d65bfddcfbb -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-cocondenser-ensembledistil-sts")
# Run inference
sentences = [
'While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.',
'There is a very good reason not to refer to the Queen\'s spouse as "King" - because they aren\'t the King.',
'A man plays the guitar.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4319, 0.0172],
# [0.4319, 1.0000, 0.0134],
# [0.0172, 0.0134, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `sts-dev` and `sts-test`
* Evaluated with [<code>SparseEmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseEmbeddingSimilarityEvaluator)
| Metric | sts-dev | sts-test |
|:--------------------|:-----------|:-----------|
| pearson_cosine | 0.8867 | 0.8539 |
| **spearman_cosine** | **0.8833** | **0.8474** |
| active_dims | 49.5483 | 53.7397 |
| sparsity_ratio | 0.9984 | 0.9982 |
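To re-run this evaluation yourself, a minimal sketch is shown below. It assumes the sparse evaluator accepts the same `sentences1` / `sentences2` / `scores` constructor arguments as the dense `EmbeddingSimilarityEvaluator`; adjust if your sentence-transformers version differs.
```python
from datasets import load_dataset
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseEmbeddingSimilarityEvaluator

# Load the model and the STS-B validation split (the "sts dev" set reported above)
model = SparseEncoder("tomaarsen/splade-cocondenser-ensembledistil-sts")
stsb_dev = load_dataset("sentence-transformers/stsb", split="validation")

evaluator = SparseEmbeddingSimilarityEvaluator(
    sentences1=stsb_dev["sentence1"],
    sentences2=stsb_dev["sentence2"],
    scores=stsb_dev["score"],
    name="sts-dev",
)
print(evaluator(model))
```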
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### stsb
* Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
* Size: 5,749 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------|
| <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
| <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
| <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseCosineSimilarityLoss(loss_fct='torch.nn.modules.loss.MSELoss')",
"lambda_corpus": 0.003
}
```
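As a sketch of how this objective can be rebuilt in code (the keyword name `lambda_corpus` is taken from the serialized config above and may be spelled differently, e.g. as a regularizer weight, in other sentence-transformers versions):
```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseCosineSimilarityLoss

model = SparseEncoder("naver/splade-cocondenser-ensembledistil")

# SpladeLoss wraps the similarity loss and adds sparsity (FLOPS) regularization
loss = SpladeLoss(
    model=model,
    loss=SparseCosineSimilarityLoss(model),
    lambda_corpus=0.003,
)
```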
### Evaluation Dataset
#### stsb
* Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 15.1 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.11 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------|:------------------------------------------------------|:------------------|
| <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> |
| <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> |
| <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseCosineSimilarityLoss(loss_fct='torch.nn.modules.loss.MSELoss')",
"lambda_corpus": 0.003
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `bf16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|:-----------------------:|:------------------------:|
| -1 | -1 | - | - | 0.8366 | - |
| 0.2778 | 100 | 0.0268 | 0.0243 | 0.8774 | - |
| 0.5556 | 200 | 0.0264 | 0.0252 | 0.8720 | - |
| 0.8333 | 300 | 0.0256 | 0.0226 | 0.8833 | - |
| -1 | -1 | - | - | - | 0.8474 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.005 kWh
- **Carbon Emitted**: 0.002 kg of CO2
- **Hours Used**: 0.022 hours
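As an illustration only (not the exact script used for this model), CodeCarbon measurements are typically collected like this:
```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # writes an emissions.csv report by default
tracker.start()
# ... run the training loop here ...
emissions_kg = tracker.stop()  # returns the estimated emissions in kg of CO2eq
print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```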
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
```
#### FlopsLoss
```bibtex
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
DevQuasar/NousResearch.DeepHermes-3-Llama-3-8B-Preview-GGUF
|
DevQuasar
| 2025-06-19T09:42:51Z | 145 | 3 | null |
[
"gguf",
"text-generation",
"base_model:NousResearch/DeepHermes-3-Llama-3-8B-Preview",
"base_model:quantized:NousResearch/DeepHermes-3-Llama-3-8B-Preview",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-02-13T21:52:41Z |
---
base_model:
- NousResearch/DeepHermes-3-Llama-3-8B-Preview
pipeline_tag: text-generation
license: llama3
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [NousResearch/DeepHermes-3-Llama-3-8B-Preview](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview)
License: please refer to the base model's license.
According to the original description:
### Deep Thinking Mode - Deep Hermes Preview can activate long chains of thought with a system prompt.
```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
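As a minimal usage sketch with llama-cpp-python and the system prompt above (the quant filename is illustrative; use whichever `.gguf` file you downloaded from this repo):
```python
from llama_cpp import Llama

llm = Llama(model_path="DeepHermes-3-Llama-3-8B-Preview.Q4_K_M.gguf", n_ctx=8192)

deep_thinking_prompt = (
    "You are a deep thinking AI, you may use extremely long chains of thought to deeply "
    "consider the problem ..."  # paste the full system prompt shown above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": deep_thinking_prompt},
        {"role": "user", "content": "Is 1001 a prime number?"},
    ]
)
print(response["choices"][0]["message"]["content"])
```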
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
07-Official-mezzo-fun-18-Viral-videos/wATCH.FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
07-Official-mezzo-fun-18-Viral-videos
| 2025-06-19T09:40:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T09:35:31Z |
<a href="https://mswds.xyz/full-video/?v=Mezzo-fun" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a>
<a href="https://mswds.xyz/full-video/?v=Mezzo-fun" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 Viral 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a href="https://mswds.xyz/full-video/?v=Mezzo-fun"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a>
|
John6666/otaku-mixillustrious-xl-v3-apex-ai-v10-sdxl
|
John6666
| 2025-06-19T09:39:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"fantasy",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:33:18Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- fantasy
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/1693483/otaku-mix-illustrious-xl-v3-apexai?modelVersionId=1916558).
This model was created by [ApexThunder_Ai](https://civitai.com/user/ApexThunder_Ai).
|
yezg/qwen2.5-sqlbot-tools-gguf
|
yezg
| 2025-06-19T09:33:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:finetune:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T09:33:11Z |
---
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yezg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-7B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zhisong111/1ssss
|
zhisong111
| 2025-06-19T09:33:23Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T09:33:15Z |
---
license: apache-2.0
---
|
steven567/q-FrozenLake-v1-4x4-noSlippery
|
steven567
| 2025-06-19T09:33:23Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T09:33:02Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course
# notebooks; it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="steven567/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
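As a follow-up sketch, you can roll out the greedy policy from the downloaded table. This assumes the pickled dictionary stores the table under a `"qtable"` key (as in the Deep RL course format) and a Gymnasium-style `step` API:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```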
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_8_1_49
|
winnieyangwannan
| 2025-06-19T09:33:09Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:23:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/natural-noob-xl-v-pred-anime-furry-experiment-v30-sdxl
|
John6666
| 2025-06-19T09:33:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"furry",
"illustration",
"vivid colors",
"accuracy",
"detail",
"creativity",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:27:07Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- furry
- illustration
- vivid colors
- accuracy
- detail
- creativity
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
The original model is [here](https://civitai.com/models/1641988/natural-noob-xl-v-pred-anime-and-furry-experiment?modelVersionId=1917776).
This model was created by [DarkFawkes](https://civitai.com/user/DarkFawkes).
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_20_1_49
|
winnieyangwannan
| 2025-06-19T09:32:19Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:20:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
senga-ml/dnote-body-auto-lr
|
senga-ml
| 2025-06-19T09:30:57Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-17T10:44:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wengti0608/a2c-PandaPickAndPlace-v3
|
wengti0608
| 2025-06-19T09:30:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T09:19:36Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -41.10 +/- 17.97
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed; adjust it if the file in this repo is named differently.
checkpoint = load_from_hub("wengti0608/a2c-PandaPickAndPlace-v3", "a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
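Optionally, you can sanity-check the loaded policy. The sketch below assumes `panda-gym` is installed (importing it registers `PandaPickAndPlace-v3`):
```python
import gymnasium as gym
import panda_gym  # noqa: F401  (registers the Panda environments)
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("PandaPickAndPlace-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```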
|
Achalkamble/codeparrot_model
|
Achalkamble
| 2025-06-19T09:26:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:27:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CSLin3303/qwen3-laws_loramodel_20250619
|
CSLin3303
| 2025-06-19T09:23:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T09:22:57Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** CSLin3303
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nnilayy/deap-arousal-multi-classification-Kfold-4
|
nnilayy
| 2025-06-19T09:22:52Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T09:21:50Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
koboldcpp/imgmodel
|
koboldcpp
| 2025-06-19T09:16:36Z | 6,631 | 6 | null |
[
"gguf",
"region:us"
] | null | 2024-03-07T06:02:36Z |
This repo contains a few simple image generation models. You can also load any Stable Diffusion 1.5, SDXL, SD3, or Flux model into KoboldCpp.
For SD1.5 and SDXL, you only need the base model (and a VAE fix if there is no baked VAE). For SD3 and Flux, you will also need a Clip and a T5-XXL model (not included here).
You can use `--sdmodel` to load the base model, and check `--help` for other SD settings.
They are all also available in the image tab of the launcher.

|
Adarsh203/Llama-3.2-3B-Instruct_cot_lora_model_
|
Adarsh203
| 2025-06-19T09:16:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T09:15:38Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Adarsh203
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Varinder2110/rachitnew
|
Varinder2110
| 2025-06-19T09:15:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T08:09:11Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Rachitnew
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Varinder2110/rachitnew/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Varinder2110/rachitnew', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Varinder2110/rachitnew/discussions) to add images that show off what you’ve made with this LoRA.
|
sungkwan2/my_awesome_opus_books_model
|
sungkwan2
| 2025-06-19T09:14:15Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-tc-big-en-ko",
"base_model:finetune:Helsinki-NLP/opus-mt-tc-big-en-ko",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-19T08:59:37Z |
---
library_name: transformers
license: cc-by-4.0
base_model: Helsinki-NLP/opus-mt-tc-big-en-ko
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-en-ko](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-ko) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4960
- Bleu: 0.007
- Gen Len: 213.19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 4.7441 | 1.0 | 50 | 4.4915 | 0.0069 | 212.985 |
| 4.2174 | 2.0 | 100 | 4.4960 | 0.007 | 213.19 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed28-2025-06-19
|
morturr
| 2025-06-19T09:14:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T02:16:36Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
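Since this repo contains a PEFT (LoRA) adapter for Llama-2-7b, a minimal loading sketch (assuming you have access to the gated base model) looks like this:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(
    base, "morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed28-2025-06-19"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```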
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
rekhtalabs/ur-2-hi-translit
|
rekhtalabs
| 2025-06-19T09:10:39Z | 0 | 2 | null |
[
"custom-transliterator",
"pytorch",
"transliterations",
"urdu",
"hindi",
"RekhtaLabs",
"Sequence2Sequence",
"Transformers",
"ur",
"hi",
"license:other",
"region:us"
] | null | 2025-06-18T12:06:45Z |
---
license: other
language:
- ur
- hi
tags:
- pytorch
- transliterations
- urdu
- hindi
- RekhtaLabs
- Sequence2Sequence
- Transformers
---

# Urdu to Hindi Transliteration Model (Character-Level)
This is a lightweight Transformer-based model trained for **character-level transliteration** of **Urdu poetry into Hindi script**. The model is specially tuned for literary and poetic text, making it ideal for applications involving shayari, nazm, or ghazals.
# Live Inference
https://rekhtalabs.org/demo/transliterate
## Model Overview
| Feature | Value |
|-------------------------|----------------------------|
| **Architecture** | Transformer (BART-style) |
| **Tokenizer** | Character-level |
| **Total Parameters** | 4M |
| **Source Vocab Size** | 87 (Urdu characters) |
| **Target Vocab Size** | 109 (Hindi characters) |
| **Embedding Size** | 256 |
| **Hidden Size** | 256 (`d_model`) |
| **Feedforward Size** | 512 |
| **Encoder Layers** | 3 |
| **Decoder Layers** | 3 |
| **Attention Heads** | 4 |
| **Max Sequence Length** | 128 characters |
---
## Usage
```python
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="rekhtalabs/ur-2-hi-translit",
    local_dir="./ur-2-hi-translit",
    local_dir_use_symlinks=False
)
```
```bash
cd ur-2-hi-translit
pip install -r requirements.txt
```
```python
import torch
import sentencepiece as spm
from torch import nn
class PositionalEncoding(nn.Module):
def __init__(self, d_model, max_len=5000):
super().__init__()
pe = torch.zeros(max_len, d_model)
position = torch.arange(0, max_len).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2) * (-torch.log(torch.tensor(10000.0)) / d_model))
pe[:, 0::2] = torch.sin(position.float() * div_term)
pe[:, 1::2] = torch.cos(position.float() * div_term)
self.pe = pe.unsqueeze(0)
def forward(self, x):
return x + self.pe[:, :x.size(1)].to(x.device)
class Transformer(nn.Module):
def __init__(self, src_vocab_size, tgt_vocab_size, d_model=256, nhead=4, num_layers=3, dim_feedforward=512, max_len=128):
super().__init__()
self.src_tok_emb = nn.Embedding(src_vocab_size, d_model)
self.tgt_tok_emb = nn.Embedding(tgt_vocab_size, d_model)
self.pos_encoder = PositionalEncoding(d_model, max_len)
self.transformer = nn.Transformer(
d_model=d_model,
nhead=nhead,
num_encoder_layers=num_layers,
num_decoder_layers=num_layers,
dim_feedforward=dim_feedforward,
batch_first=True
)
self.out = nn.Linear(d_model, tgt_vocab_size)
def forward(self, src, tgt):
src = self.pos_encoder(self.src_tok_emb(src))
tgt = self.pos_encoder(self.tgt_tok_emb(tgt))
tgt_input = tgt
tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt_input.size(1)).to(src.device)
out = self.transformer(src, tgt_input, tgt_mask=tgt_mask)
return self.out(out)
device = torch.device("cpu")
sp_nastaaliq = spm.SentencePieceProcessor(model_file='nastaaliq_char.model')
sp_devanagari = spm.SentencePieceProcessor(model_file='devanagari_char.model')
model = Transformer(
src_vocab_size=sp_nastaaliq.get_piece_size(),
tgt_vocab_size=sp_devanagari.get_piece_size()
)
checkpoint = torch.load("transformer_transliteration_final.pt", map_location=device)
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
model.to(device)
def transliterate_urdu_to_hindi(text_urdu, max_len=128):
src_ids = [2] + sp_nastaaliq.encode(text_urdu)[:max_len - 2] + [3]
src_tensor = torch.tensor(src_ids).unsqueeze(0).to(device) # shape: (1, seq_len)
tgt_ids = [2]
tgt_tensor = torch.tensor(tgt_ids).unsqueeze(0).to(device)
for _ in range(max_len):
output = model(src_tensor, tgt_tensor)
next_token_logits = output[0, -1, :]
next_token_id = torch.argmax(next_token_logits).item()
if next_token_id == 3:
break
tgt_ids.append(next_token_id)
tgt_tensor = torch.tensor(tgt_ids).unsqueeze(0).to(device)
return sp_devanagari.decode(tgt_ids[1:])
res=transliterate_urdu_to_hindi("وسوسے دل میں نہ رکھ خوف رسن لے کے نہ چل")
print(res)
```
## Output
```text
वसवसे दिल में न रख ख़ौफ़-ए-रसन ले के न चल
```
---
## Dataset
- Trained on approximately **800,000 Urdu-Hindi sentence pairs**
- Sourced and curated for transliteration.
- Character-level alignment ensured for quality
---
|
nnilayy/deap-valence-multi-classification-Kfold-4
|
nnilayy
| 2025-06-19T09:09:55Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T09:09:51Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
wbasharat/llama3_3b_freeze_instructionTuning
|
wbasharat
| 2025-06-19T09:08:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T09:02:24Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kortix/FastApply-1.5B-v1.0
|
Kortix
| 2025-06-19T09:07:52Z | 2,101 | 34 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"fast-apply",
"instant-apply",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-18T11:55:22Z |
---
base_model: unsloth/qwen2.5-coder-1.5b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
- fast-apply
- instant-apply
---
# FastApply-1.5B-v1.0
*🚀 Update May 2025:* For production-grade throughput, we use *[Morph](https://morphllm.com)* (the hosted Fast Apply API powering [SoftGen AI](https://softgen.ai/)).
- Morph hits *~4,500 tok/s* even on huge token diffs
- Larger model trained on millions of examples and tuned for accuracy.
> Stable inference, large free tier, highly recommended if you need serious speed in prod.
[Github: kortix-ai/fast-apply](https://github.com/kortix-ai/fast-apply)
[Dataset: Kortix/FastApply-dataset-v1.0](https://huggingface.co/datasets/Kortix/FastApply-dataset-v1.0)
[Try it now on 👉 Google Colab](https://colab.research.google.com/drive/1BNCab4oK-xBqwFQD4kCcjKc7BPKivkm1?usp=sharing)
## Model Details
### Basic Information
- **Developed by:** Kortix
- **License:** apache-2.0
- **Finetuned from model:** [unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit)
### Model Description
FastApply-1.5B-v1.0 is a 1.5B model designed for instant code application, producing full file edits to power [SoftGen AI](https://softgen.ai/).
It is part of the Fast Apply pipeline for data generation and fine-tuning Qwen2.5 Coder models.
The model achieves high throughput when deployed on fast providers like Fireworks while maintaining high edit accuracy, with a speed of approximately 340 tokens/second.
## Intended Use
FastApply-1.5B-v1.0 is intended for use in AI-powered code editors and tools that require fast, accurate code modifications. It is particularly well-suited for:
- Instant code application tasks
- Full file edits
- Integration with AI-powered code editors like Aider and PearAI
- Local tools to reduce the cost of frontier model output
## Inference template
FastApply-1.5B-v1.0 is based on the Qwen2.5 Coder architecture and is fine-tuned for code editing tasks. It uses a specific prompt structure for inference:
```
<|im_start|>system
You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated.<|im_end|>
<|im_start|>user
Merge all changes from the <update> snippet into the <code> below.
- Preserve the code's structure, order, comments, and indentation exactly.
- Output only the updated code, enclosed within <updated-code> and </updated-code> tags.
- Do not include any additional text, explanations, placeholders, ellipses, or code fences.
<code>{original_code}</code>
<update>{update_snippet}</update>
Provide the complete updated code.<|im_end|>
<|im_start|>assistant
```
The model's output is structured as:
```
<updated-code>[Full-complete updated file]</updated-code>
```
## Additional Information
For more details on the Fast Apply pipeline, data generation process, and deployment instructions, please refer to the [GitHub repository](https://github.com/kortix-ai/fast-apply).
## How to Use
To use the model, you can load it using the Hugging Face Transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Kortix/FastApply-1.5B-v1.0", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Kortix/FastApply-1.5B-v1.0")
# Prepare your input following the prompt structure mentioned above
input_text = """<|im_start|>system
You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated.<|im_end|>
<|im_start|>user
Merge all changes from the <update> snippet into the <code> below.
- Preserve the code's structure, order, comments, and indentation exactly.
- Output only the updated code, enclosed within <updated-code> and </updated-code> tags.
- Do not include any additional text, explanations, placeholders, ellipses, or code fences.
<code>{original_code}</code>
<update>{update_snippet}</update>
Provide the complete updated code.<|im_end|>
<|im_start|>assistant
"""
input_text = input_text.format(
original_code=original_code,
update_snippet=update_snippet,
).strip()
# Generate the response
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_length=8192,)
response = tokenizer.decode(output[0][len(input_ids[0]):])
print(response)
# Extract the updated code from the response
updated_code = response.split("<updated-code>")[1].split("</updated-code>")[0]
```
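If you prefer not to hand-write the `<|im_start|>` markup, the tokenizer's built-in chat template should produce the same structure for this Qwen2.5-based model. A minimal sketch (reusing `original_code` and `update_snippet` from above; verify the rendered prompt against the template shown earlier):
```python
messages = [
    {"role": "system", "content": "You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated."},
    {"role": "user", "content": (
        "Merge all changes from the <update> snippet into the <code> below.\n"
        "- Preserve the code's structure, order, comments, and indentation exactly.\n"
        "- Output only the updated code, enclosed within <updated-code> and </updated-code> tags.\n"
        "- Do not include any additional text, explanations, placeholders, ellipses, or code fences.\n\n"
        f"<code>{original_code}</code>\n\n<update>{update_snippet}</update>\n\n"
        "Provide the complete updated code."
    )},
]
# apply_chat_template tokenizes and appends the assistant turn marker.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=8192)
```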
## Evaluation

|
lefantom00/Llama-3-8B-it-262k-iSMART
|
lefantom00
| 2025-06-19T09:05:21Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:gradientai/Llama-3-8B-Instruct-262k",
"base_model:finetune:gradientai/Llama-3-8B-Instruct-262k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:51:16Z |
---
base_model: gradientai/Llama-3-8B-Instruct-262k
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lefantom00
- **License:** apache-2.0
- **Finetuned from model :** gradientai/Llama-3-8B-Instruct-262k
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
VIDEOS-18-Katrina-Lim-Virals-Kiffy-Videos/FULL.VIDEO.Katrina.Lim.Viral.Video.Tutorial.Official
|
VIDEOS-18-Katrina-Lim-Virals-Kiffy-Videos
| 2025-06-19T09:05:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T09:04:41Z |
<a href="https://mswds.xyz/full-video/?v=Katrina.Lim" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a>
<a href="https://mswds.xyz/full-video/?v=Katrina.Lim" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 Viral 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a href="https://mswds.xyz/full-video/?v=Katrina.Lim"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a>
|
ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3
|
ArtusDev
| 2025-06-19T09:04:51Z | 0 | 0 | null |
[
"llama-3.3",
"finetune",
"roleplay",
"chat",
"wings-of-fire",
"exl3",
"dataset:Darkhn/WOF_QA_V2",
"dataset:Darkhn/WOF_Pretraining",
"dataset:Darkhn/WOF_V3_Combined_Dataset",
"base_model:Darkhn/L3.3-70B-Animus-V2",
"base_model:quantized:Darkhn/L3.3-70B-Animus-V2",
"license:llama3.3",
"region:us"
] | null | 2025-06-19T03:38:18Z |
---
base_model: Darkhn/L3.3-70B-Animus-V2
base_model_relation: quantized
quantized_by: ArtusDev
license: llama3.3
tags:
- llama-3.3
- finetune
- roleplay
- chat
- wings-of-fire
- exl3
datasets:
- Darkhn/WOF_QA_V2
- Darkhn/WOF_Pretraining
- Darkhn/WOF_V3_Combined_Dataset
---
## EXL3 Quants of Darkhn/L3.3-70B-Animus-V2
EXL3 quants of [Darkhn/L3.3-70B-Animus-V2](https://huggingface.co/Darkhn/L3.3-70B-Animus-V2) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [2.5_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/2.5bpw_H6) | 2.5 | 6 |
| [3.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.25_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/4.25bpw_H6) | 4.25 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/8.0bpw_H6) | 8.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```
huggingface-cli download ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
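Alternatively, a minimal sketch using the `huggingface_hub` Python API (same repo and revision names as above):
```python
from huggingface_hub import snapshot_download

# Download a single quant revision (branch) into the current directory.
snapshot_download(
    repo_id="ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3",
    revision="5.0bpw_H6",
    local_dir="./",
)
```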
</details>
|
Alphatao/Affine-9801198
|
Alphatao
| 2025-06-19T09:04:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:58:38Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B-Base
---
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, delivering a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes, and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
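Upgrading usually resolves this, for example:
```shell
pip install -U "transformers>=4.51.0"
```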
The following contains a code snippet illustrating how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters** (illustrated by the sketch after this list):
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
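A minimal sketch of passing the thinking-mode settings from item 1 to `generate` (reusing `model_inputs` from the Quickstart above; `min_p` requires a recent `transformers` release):
```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # sampling rather than greedy decoding
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```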
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_12_1_49
|
winnieyangwannan
| 2025-06-19T09:03:24Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:19:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seroe/jina-reranker-v2-base-multilingual-turkish-reranker-triplet_v1
|
seroe
| 2025-06-19T09:03:22Z | 386 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"cross-encoder",
"generated_from_trainer",
"dataset_size:89964",
"loss:CachedMultipleNegativesRankingLoss",
"text-ranking",
"custom_code",
"tr",
"dataset:seroe/vodex-turkish-reranker-triplets",
"arxiv:1908.10084",
"base_model:jinaai/jina-reranker-v2-base-multilingual",
"base_model:finetune:jinaai/jina-reranker-v2-base-multilingual",
"license:apache-2.0",
"model-index",
"region:us"
] |
text-ranking
| 2025-05-13T16:37:10Z |
---
language:
- tr
license: apache-2.0
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:89964
- loss:CachedMultipleNegativesRankingLoss
base_model: jinaai/jina-reranker-v2-base-multilingual
datasets:
- seroe/vodex-turkish-reranker-triplets
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: jinaai/jina-reranker-v2-base-multilingual
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: val hard
type: val-hard
metrics:
- type: map
value: 0.6456
name: Map
- type: mrr@10
value: 0.6516
name: Mrr@10
- type: ndcg@10
value: 0.7332
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: test hard
type: test-hard
metrics:
- type: map
value: 0.6395
name: Map
- type: mrr@10
value: 0.6463
name: Mrr@10
- type: ndcg@10
value: 0.729
name: Ndcg@10
---
# jinaai/jina-reranker-v2-base-multilingual
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) on the [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
## ⚠️ Domain-Specific Warning
This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**.
While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains.
Please assess its performance carefully before applying it outside of telecommunications use cases.
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) <!-- at revision eed787badf7784e1a25c0eaa428627c8cbef511e -->
- **Maximum Sequence Length:** 1024 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets)
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("seroe/jina-reranker-v2-base-multilingual-turkish-reranker-triplet_v1")
# Get scores for pairs of texts
pairs = [
['Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?', 'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.'],
['Kampanya süresince internet hızı nasıl değişebilir?', 'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.'],
["Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?", "Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir."],
['Taahhüt süresi dolmadan internet hizmeti iptal edilirse ne olur?', 'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.'],
['Aylık 15 GB ek paketini nereden satın alabilirim?', 'Bu ek paketi almak için hangi kanalları kullanabilirim?'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?',
[
'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.',
'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.',
"Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.",
'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.',
'Bu ek paketi almak için hangi kanalları kullanabilirim?',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `val-hard` and `test-hard`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | val-hard | test-hard |
|:------------|:---------------------|:---------------------|
| map | 0.6456 (+0.0321) | 0.6395 (+0.0140) |
| mrr@10 | 0.6516 (+0.0380) | 0.6463 (+0.0208) |
| **ndcg@10** | **0.7332 (+0.1185)** | **0.7290 (+0.1018)** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### vodex-turkish-reranker-triplets
* Dataset: [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) at [ca7d206](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets/tree/ca7d2063ad4fec15fbf739835ab6926e051950c0)
* Size: 89,964 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 characters</li><li>mean: 57.83 characters</li><li>max: 112 characters</li></ul> | <ul><li>min: 35 characters</li><li>mean: 92.19 characters</li><li>max: 221 characters</li></ul> | <ul><li>min: 31 characters</li><li>mean: 78.41 characters</li><li>max: 143 characters</li></ul> |
* Samples:
| query | positive | negative |
|:-------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|
| <code>Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?</code> | <code>Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.</code> | <code>Faturasız tarifelerde yurtdışı mesaj ücretleri 10 kuruş olarak uygulanmaktadır.</code> |
| <code>Kampanya süresince internet hızı nasıl değişebilir?</code> | <code>Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.</code> | <code>Kampanya süresince internet hızı sabit kalır ve değişiklik yapılamaz.</code> |
| <code>Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?</code> | <code>Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.</code> | <code>Vodafone tarifelerinde KDV ve ÖİV, abonelerin talep etmesi durumunda eklenmektedir.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": 4,
"activation_fn": "torch.nn.modules.activation.Sigmoid",
"mini_batch_size": 32
}
```
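For reference, a minimal sketch of constructing this loss with the parameters above (assuming the cross-encoder losses API of recent `sentence-transformers` releases):
```python
import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import CachedMultipleNegativesRankingLoss

# Base reranker to be finetuned (requires trust_remote_code for the Jina architecture).
model = CrossEncoder("jinaai/jina-reranker-v2-base-multilingual", num_labels=1, trust_remote_code=True)
loss = CachedMultipleNegativesRankingLoss(
    model=model,
    scale=10.0,
    num_negatives=4,
    activation_fn=torch.nn.Sigmoid(),
    mini_batch_size=32,
)
```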
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1024
- `learning_rate`: 1e-06
- `weight_decay`: 0.08
- `warmup_ratio`: 0.2
- `bf16`: True
- `dataloader_num_workers`: 8
- `load_best_model_at_end`: True
- `group_by_length`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1024
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.08
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 8
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: True
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | val-hard_ndcg@10 | test-hard_ndcg@10 |
|:----------:|:-------:|:-------------:|:--------------------:|:--------------------:|
| 0.5682 | 100 | 0.8068 | 0.7337 (+0.1191) | 0.7303 (+0.1031) |
| 1.1307 | 200 | 0.7885 | 0.7335 (+0.1189) | 0.7303 (+0.1032) |
| 1.6989 | 300 | 0.7881 | 0.7333 (+0.1187) | 0.7294 (+0.1022) |
| 2.2614 | 400 | 0.7881 | 0.7335 (+0.1189) | 0.7298 (+0.1027) |
| **2.8295** | **500** | **0.7851** | **0.7332 (+0.1185)** | **0.7290 (+0.1018)** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.46.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.6.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
John6666/illusoriax-v10-sdxl
|
John6666
| 2025-06-19T09:02:59Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"anime style",
"glossy texture",
"sexy female",
"dynamic pose",
"detailed anatomy",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T08:56:45Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- anime style
- glossy texture
- sexy female
- dynamic pose
- detailed anatomy
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1672527/illusoriax?modelVersionId=1893071).
This model was created by [Neural_Lens](https://civitai.com/user/Neural_Lens).
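A minimal loading sketch with 🤗 Diffusers (standard `StableDiffusionXLPipeline` usage; the prompt below is only an illustration):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/illusoriax-v10-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "1girl, anime style, glossy texture, dynamic pose, detailed anatomy",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("illusoriax_sample.png")
```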
|
seroe/Qwen3-Embedding-0.6B-turkish-triplet-matryoshka
|
seroe
| 2025-06-19T09:02:32Z | 31 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"qwen3",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:70941",
"loss:MatryoshkaLoss",
"loss:CachedMultipleNegativesRankingLoss",
"tr",
"dataset:seroe/vodex-turkish-triplets",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:2101.06983",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:finetune:Qwen/Qwen3-Embedding-0.6B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-10T09:13:03Z |
---
language:
- tr
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:70941
- loss:MatryoshkaLoss
- loss:CachedMultipleNegativesRankingLoss
base_model: Qwen/Qwen3-Embedding-0.6B
widget:
- source_sentence: Bağımsız akıllı cihaz kampanyalarının detayları nelerdir?
sentences:
- Vodafone'un kampanyalarına katılan aboneler, seçtikleri tarifeye göre belirli
indirimlerden yararlanabilirler. Örneğin, Cep Avantaj tarifeleri üzerinden 10
TL ile 20 TL arasında indirim sağlanmaktadır.
- Kampanyalar, farklı cihaz modelleri için aylık ödeme planları sunmaktadır.
- Vodafone'un kampanyaları, sadece internet paketleri ile ilgilidir.
- source_sentence: İnternet hattımı nasıl iptal ettirebilirim?
sentences:
- Vodafone'da, müşterinin taşımak istediği numara yerine yanlışlıkla başka bir numaranın
taşındığı durumlar, hatalı taşıma sürecini kapsamaktadır.
- İnternet hattınızı iptal etmek için sadece online form doldurmanız yeterlidir.
- İptal işlemi için müşteri hizmetlerini arayarak talepte bulunmanız ve iptal dilekçesini
göndermeniz gerekmektedir.
- source_sentence: Vodafone kampanyalarında veri kullanımı ve cezai şartlar
sentences:
- Yurtdışında geçerli olan tarifeler, yalnızca kurumsal müşterilere yöneliktir.
- Vodafone kampanyaları, kullanıcıların istedikleri kadar veri kullanmalarına izin
verir ve cezai şartlar uygulanmaz.
- Vodafone'un kampanyalarında, kullanıcıların veri paketleri kullanımı belirli limitler
dahilinde gerçekleşir ve kampanyadan yararlanma koşulları vardır.
- source_sentence: Alcatel One Touch POP 7 Tablet'in işletim sistemi nedir?
sentences:
- Yabancılar için sunulan Limitsiz Fiber Kampanyası, belirli hızlarda internet paketleri
sunmaktadır ve katılım için yabancı uyruklu olma şartı aranmaktadır.
- Alcatel One Touch POP 7 Tablet, iOS işletim sistemi ile çalışan bir cihazdır.
- Alcatel One Touch POP 7 Tablet, Android 4.2 işletim sistemi ile çalışmaktadır.
- source_sentence: Vodafone Net'in internet hız garantisi var mı?
sentences:
- Ek data paketlerinin geçerlilik süreleri genellikle 30 gün olarak belirlenmiştir,
ancak bazı paketler 7 gün geçerlilik süresine sahiptir.
- Vodafone Net, tüm abonelerine en az 100 Mbps hız garantisi vermektedir.
- Vodafone Net, internet hızını garanti etmemekte, bu hız abonenin hattının uygunluğuna
ve santrale olan mesafeye bağlı olarak değişiklik göstermektedir.
datasets:
- seroe/vodex-turkish-triplets
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: Qwen3-Embedding-0.6B Türkçe Triplet Matryoshka
results:
- task:
type: triplet
name: Triplet
dataset:
name: tr triplet dev 1024d
type: tr-triplet-dev-1024d
metrics:
- type: cosine_accuracy
value: 0.9672672152519226
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.9776706695556641
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: tr triplet dev 768d
type: tr-triplet-dev-768d
metrics:
- type: cosine_accuracy
value: 0.9690433740615845
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.9776706695556641
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: tr triplet dev 512d
type: tr-triplet-dev-512d
metrics:
- type: cosine_accuracy
value: 0.9718345403671265
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.9781781435012817
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: tr triplet dev 256d
type: tr-triplet-dev-256d
metrics:
- type: cosine_accuracy
value: 0.9687896370887756
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.9771631360054016
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test 1024d
type: all-nli-test-1024d
metrics:
- type: cosine_accuracy
value: 0.9764078855514526
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test 768d
type: all-nli-test-768d
metrics:
- type: cosine_accuracy
value: 0.9759005308151245
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test 512d
type: all-nli-test-512d
metrics:
- type: cosine_accuracy
value: 0.9748858213424683
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test 256d
type: all-nli-test-256d
metrics:
- type: cosine_accuracy
value: 0.9756468534469604
name: Cosine Accuracy
---
# Qwen3-Embedding-0.6B Türkçe Triplet Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) on the [vodex-turkish-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-triplets) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
## ⚠️ Domain-Specific Warning
This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**.
While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains.
Please assess its performance carefully before applying it outside of telecommunications use cases.
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) <!-- at revision 744169034862c8eec56628663995004342e4e449 -->
- **Maximum Sequence Length:** 32768 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [vodex-turkish-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-triplets)
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32768, 'do_lower_case': False}) with Transformer model: Qwen3Model
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seroe/Qwen3-Embedding-0.6B-turkish-triplet-matryoshka")
# Run inference
sentences = [
"Vodafone Net'in internet hız garantisi var mı?",
'Vodafone Net, internet hızını garanti etmemekte, bu hız abonenin hattının uygunluğuna ve santrale olan mesafeye bağlı olarak değişiklik göstermektedir.',
'Vodafone Net, tüm abonelerine en az 100 Mbps hız garantisi vermektedir.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
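Because the model was trained with a Matryoshka objective over 1024, 768, 512 and 256 dimensions, the same checkpoint can be loaded with a smaller output dimensionality at only a small accuracy cost (see the evaluation tables below). A minimal sketch, assuming the `truncate_dim` argument available in recent sentence-transformers releases:
```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 embedding dimensions of the same checkpoint.
model_256 = SentenceTransformer(
    "seroe/Qwen3-Embedding-0.6B-turkish-triplet-matryoshka",
    truncate_dim=256,
)

embeddings = model_256.encode([
    "Vodafone Net'in internet hız garantisi var mı?",
    "Vodafone Net, internet hızını garanti etmemektedir.",
])
print(embeddings.shape)
# (2, 256)
```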
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `tr-triplet-dev-1024d` and `all-nli-test-1024d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 1024
}
```
| Metric | tr-triplet-dev-1024d | all-nli-test-1024d |
|:--------------------|:---------------------|:-------------------|
| **cosine_accuracy** | **0.9673** | **0.9764** |
#### Triplet
* Datasets: `tr-triplet-dev-768d` and `all-nli-test-768d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | tr-triplet-dev-768d | all-nli-test-768d |
|:--------------------|:--------------------|:------------------|
| **cosine_accuracy** | **0.969** | **0.9759** |
#### Triplet
* Datasets: `tr-triplet-dev-512d` and `all-nli-test-512d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | tr-triplet-dev-512d | all-nli-test-512d |
|:--------------------|:--------------------|:------------------|
| **cosine_accuracy** | **0.9718** | **0.9749** |
#### Triplet
* Datasets: `tr-triplet-dev-256d` and `all-nli-test-256d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | tr-triplet-dev-256d | all-nli-test-256d |
|:--------------------|:--------------------|:------------------|
| **cosine_accuracy** | **0.9688** | **0.9756** |
#### Triplet
* Dataset: `tr-triplet-dev-1024d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 1024
}
```
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9777** |
#### Triplet
* Dataset: `tr-triplet-dev-768d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9777** |
#### Triplet
* Dataset: `tr-triplet-dev-512d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9782** |
#### Triplet
* Dataset: `tr-triplet-dev-256d`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9772** |
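The dev-set accuracies above can be reproduced with the same evaluator class. A minimal sketch, assuming the held-out triplets are available as a `test` split of the training dataset (the `query`/`positive`/`negative` column names follow the dataset card):
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("seroe/Qwen3-Embedding-0.6B-turkish-triplet-matryoshka")

# The split name "test" is an assumption; adjust it to the split you actually hold out.
triplets = load_dataset("seroe/vodex-turkish-triplets", split="test")

evaluator = TripletEvaluator(
    anchors=triplets["query"],
    positives=triplets["positive"],
    negatives=triplets["negative"],
    name="tr-triplet-dev-1024d",
    truncate_dim=1024,
)
print(evaluator(model))
# {'tr-triplet-dev-1024d_cosine_accuracy': ...}
```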
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### vodex-turkish-triplets
* Dataset: [vodex-turkish-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-triplets) at [0c9fab0](https://huggingface.co/datasets/seroe/vodex-turkish-triplets/tree/0c9fab08a042b11b30064b5adc205f626c8a6add)
* Size: 70,941 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 17.91 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 38.38 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 29.89 tokens</li><li>max: 59 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Kampanya tarihleri ve katılım şartları</code> | <code>Kampanya, 11 Ekim 2018'de başlayıp 29 Ekim 2018'de sona erecek. Katılımcıların belirli bilgileri doldurması ve Vodafone Müzik pass veya Video pass sahibi olmaları gerekiyor.</code> | <code>Kampanya, sadece İstanbul'daki kullanıcılar için geçerli olup, diğer şehirlerden katılım mümkün değildir.</code> |
| <code>Taahhüt süresi dolmadan başka bir kampanyaya geçiş yapılırsa ne olur?</code> | <code>Eğer abone taahhüt süresi dolmadan başka bir kampanyaya geçerse, bu durumda önceki kampanya süresince sağlanan indirimler ve diğer faydalar, iptal tarihinden sonraki fatura ile tahsil edilecektir.</code> | <code>Aboneler, taahhüt süresi dolmadan başka bir kampanyaya geçtiklerinde, yeni kampanyadan faydalanmak için ek bir ücret ödemek zorundadırlar.</code> |
| <code>FreeZone üyeliğimi nasıl sorgulayabilirim?</code> | <code>Üyeliğinizi sorgulamak için FREEZONESORGU yazarak 1525'e SMS gönderebilirsiniz.</code> | <code>Üyeliğinizi sorgulamak için Vodafone mağazasına gitmeniz gerekmektedir.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "CachedMultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
768,
512,
256
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
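For reference, a minimal sketch of how this loss configuration is typically constructed with the sentence-transformers API (the trainer and dataset plumbing are omitted, and the base checkpoint is loaded fresh):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import (
    CachedMultipleNegativesRankingLoss,
    MatryoshkaLoss,
)

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

# The inner loss scores (query, positive, negative) triplets with in-batch
# negatives; the outer loss re-applies it at each truncated dimensionality.
inner_loss = CachedMultipleNegativesRankingLoss(model)
train_loss = MatryoshkaLoss(
    model,
    loss=inner_loss,
    matryoshka_dims=[1024, 768, 512, 256],
    matryoshka_weights=[1, 1, 1, 1],
)
```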
### Evaluation Dataset
#### vodex-turkish-triplets
* Dataset: [vodex-turkish-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-triplets) at [0c9fab0](https://huggingface.co/datasets/seroe/vodex-turkish-triplets/tree/0c9fab08a042b11b30064b5adc205f626c8a6add)
* Size: 3,941 evaluation samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.68 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 38.84 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 30.09 tokens</li><li>max: 53 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:-----------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------|
| <code>Vodafone Net'e geçiş yaparken bağlantı ücreti var mı?</code> | <code>Vodafone Net'e geçişte 264 TL bağlantı ücreti bulunmaktadır ve bu ücret 24 ay boyunca aylık 11 TL olarak faturalandırılmaktadır.</code> | <code>Vodafone Net'e geçişte bağlantı ücreti yoktur ve tüm işlemler ücretsizdir.</code> |
| <code>Bağımsız akıllı cihaz kampanyalarının detayları nelerdir?</code> | <code>Kampanyalar, farklı cihaz modelleri için aylık ödeme planları sunmaktadır.</code> | <code>Vodafone'un kampanyaları, sadece internet paketleri ile ilgilidir.</code> |
| <code>Fibermax hizmeti iptal edilirse ne gibi sonuçlar doğar?</code> | <code>İptal işlemi taahhüt süresi bitmeden yapılırsa, indirimler ve ücretsiz hizmet bedelleri ödenmelidir.</code> | <code>Fibermax hizmeti iptal edildiğinde, kullanıcıdan hiçbir ücret talep edilmez.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "CachedMultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
768,
512,
256
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 256
- `weight_decay`: 0.01
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.05
- `save_only_model`: True
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: True
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | tr-triplet-dev-1024d_cosine_accuracy | tr-triplet-dev-768d_cosine_accuracy | tr-triplet-dev-512d_cosine_accuracy | tr-triplet-dev-256d_cosine_accuracy | all-nli-test-1024d_cosine_accuracy | all-nli-test-768d_cosine_accuracy | all-nli-test-512d_cosine_accuracy | all-nli-test-256d_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:------------------------------------:|:-----------------------------------:|:-----------------------------------:|:-----------------------------------:|:----------------------------------:|:---------------------------------:|:---------------------------------:|:---------------------------------:|
| 0.3429 | 12 | 10.2876 | 3.2218 | 0.9145 | 0.9211 | 0.9262 | 0.9229 | - | - | - | - |
| 0.6857 | 24 | 6.1342 | 2.5250 | 0.9531 | 0.9561 | 0.9571 | 0.9553 | - | - | - | - |
| 0.3429 | 12 | 4.8969 | 2.3174 | 0.9597 | 0.9632 | 0.9617 | 0.9632 | - | - | - | - |
| 0.6857 | 24 | 4.2031 | 2.0383 | 0.9673 | 0.9690 | 0.9718 | 0.9688 | - | - | - | - |
| 0.3429 | 12 | 3.3893 | 2.3286 | 0.9650 | 0.9655 | 0.9652 | 0.9652 | - | - | - | - |
| 0.6857 | 24 | 3.0878 | 2.1443 | 0.9728 | 0.9739 | 0.9749 | 0.9736 | - | - | - | - |
| 1.0286 | 36 | 3.504 | 1.8128 | 0.9708 | 0.9716 | 0.9716 | 0.9723 | - | - | - | - |
| 1.3714 | 48 | 2.4279 | 1.8915 | 0.9779 | 0.9774 | 0.9782 | 0.9777 | - | - | - | - |
| 1.7143 | 60 | 2.2489 | 1.8638 | 0.9777 | 0.9777 | 0.9782 | 0.9772 | - | - | - | - |
| -1 | -1 | - | - | - | - | - | - | 0.9764 | 0.9759 | 0.9749 | 0.9756 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
AmberYifan/llama3-8b-full-pretrain-mix-mid-tweet-1m-en
|
AmberYifan
| 2025-06-19T09:02:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T06:17:42Z |
---
library_name: transformers
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3-8b-full-pretrain-mix-mid-tweet-1m-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-full-pretrain-mix-mid-tweet-1m-en
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the mix_mid_tweet_1m_en dataset.
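In the absence of an official snippet, a hedged inference sketch with 🤗 Transformers, assuming the tokenizer keeps the Llama 3 chat template of the instruct base model (prompt and generation settings are illustrative, not taken from the training run):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AmberYifan/llama3-8b-full-pretrain-mix-mid-tweet-1m-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a short tweet about summer evenings."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```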
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
videos-mezzo-fun-Viral-Video-Original-Link/FULL.VIDEO.mezzo.fun.viral.video.viral.On.Social.Media.Official
|
videos-mezzo-fun-Viral-Video-Original-Link
| 2025-06-19T09:02:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T09:01:44Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_6_1_49
|
winnieyangwannan
| 2025-06-19T09:01:49Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:19:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
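In the absence of a filled-in snippet, a hedged sketch, assuming the checkpoint keeps the processor and chat template of the Qwen2.5-VL-7B-Instruct base it is derived from (the image URL and prompt are placeholders):
```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_6_1_49"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]

# Recent transformers releases let the processor fetch the image referenced in the chat messages.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)[0])
```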
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seroe/Qwen3-Reranker-0.6B-turkish-triplet
|
seroe
| 2025-06-19T09:01:07Z | 12 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"qwen3",
"cross-encoder",
"generated_from_trainer",
"dataset_size:215676",
"loss:CachedMultipleNegativesRankingLoss",
"text-ranking",
"tr",
"dataset:seroe/vodex-turkish-triplets-large",
"arxiv:1908.10084",
"base_model:Qwen/Qwen3-Reranker-0.6B",
"base_model:finetune:Qwen/Qwen3-Reranker-0.6B",
"license:apache-2.0",
"model-index",
"region:us"
] |
text-ranking
| 2025-06-16T20:04:30Z |
---
language:
- tr
license: apache-2.0
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:215676
- loss:CachedMultipleNegativesRankingLoss
base_model: Qwen/Qwen3-Reranker-0.6B
datasets:
- seroe/vodex-turkish-triplets-large
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: Qwen/Qwen3-Reranker-0.6B
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: val hard
type: val-hard
metrics:
- type: map
value: 0.7818
name: Map
- type: mrr@10
value: 0.782
name: Mrr@10
- type: ndcg@10
value: 0.8364
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: test hard
type: test-hard
metrics:
- type: map
value: 0.7816
name: Map
- type: mrr@10
value: 0.7819
name: Mrr@10
- type: ndcg@10
value: 0.8362
name: Ndcg@10
---
# Qwen/Qwen3-Reranker-0.6B
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [Qwen/Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) on the [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
## ⚠️ Domain-Specific Warning
This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**.
While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains.
Please assess its performance carefully before applying it outside of telecommunications use cases.
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [Qwen/Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) <!-- at revision 6e9e69830b95c52b5fd889b7690dda3329508de3 -->
- **Maximum Sequence Length:** 40960 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large)
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("seroe/Qwen3-Reranker-0.6B-turkish-triplet")
# Get scores for pairs of texts
pairs = [
['Yeni Red Business VIP tarifesi, yüksek veri ve dakika ihtiyaçları olan işletmeler için tasarlanmış bir premium seçenektir.', 'Red Business VIP, işletmelerin yoğun veri ve konuşma ihtiyaçlarını karşılamak için geliştirilmiş bir üst düzey tarifedir.'],
["Vodafone'un Yeni Uyumlu Hoşgeldin Kampanyası, belirli tarifeler için 12+12 ay taahhüt karşılığında indirimler sunmaktadır ve kampanya iki dönemden oluşmaktadır.", "Vodafone'un Yeni Uyumlu Hoşgeldin Kampanyası, 12+12 ay taahhüt veren abonelere belirli tarifelerde ilk 12 ay için 20 TL, ikinci 12 ay için 15 TL indirim sağlamaktadır."],
["Vodafone'un Kolay Paketleri, faturasız hat kullanıcılarına TL yükleme gereksinimi olmadan avantajlı paketler sunar ve her ay otomatik yenilenmez.", "Vodafone'un Kolay Paketleri, faturasız hat kullanıcıları için tasarlanmış olup, TL yükleme zorunluluğu olmadan satın alınabilir ve otomatik yenileme yapılmaz."],
["Samsung Galaxy Note 3 cihazı, Vodafone'un Red tarifeleriyle birlikte aylık ek ödeme seçenekleriyle sunulmuş ve kampanya kodlarıyla desteklenmiştir.", 'Vodafone, Samsung Galaxy Note 3 cihazını Red tarifeleriyle birleştirerek, aylık ek ödeme planları ve kampanya kodlarıyla müşterilere sunmuştur.'],
['Red Elite Extra tarifesi, 36 aylık taahhütle 40 TL başlangıç fiyatı ve 165 TL üst fiyat seçeneğiyle sona eren kampanyalar arasında yer almıştır.', "Vodafone'un sona eren kampanyaları arasında yer alan Red Elite Extra tarifesi, 36 aylık taahhütle 40 TL'den başlayıp 165 TL'ye kadar fiyatlandırılmıştır."],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'Yeni Red Business VIP tarifesi, yüksek veri ve dakika ihtiyaçları olan işletmeler için tasarlanmış bir premium seçenektir.',
[
'Red Business VIP, işletmelerin yoğun veri ve konuşma ihtiyaçlarını karşılamak için geliştirilmiş bir üst düzey tarifedir.',
"Vodafone'un Yeni Uyumlu Hoşgeldin Kampanyası, 12+12 ay taahhüt veren abonelere belirli tarifelerde ilk 12 ay için 20 TL, ikinci 12 ay için 15 TL indirim sağlamaktadır.",
"Vodafone'un Kolay Paketleri, faturasız hat kullanıcıları için tasarlanmış olup, TL yükleme zorunluluğu olmadan satın alınabilir ve otomatik yenileme yapılmaz.",
'Vodafone, Samsung Galaxy Note 3 cihazını Red tarifeleriyle birleştirerek, aylık ek ödeme planları ve kampanya kodlarıyla müşterilere sunmuştur.',
"Vodafone'un sona eren kampanyaları arasında yer alan Red Elite Extra tarifesi, 36 aylık taahhütle 40 TL'den başlayıp 165 TL'ye kadar fiyatlandırılmıştır.",
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `val-hard` and `test-hard`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | val-hard | test-hard |
|:------------|:---------------------|:---------------------|
| map | 0.7818 (+0.0995) | 0.7816 (+0.0987) |
| mrr@10 | 0.7820 (+0.0998) | 0.7819 (+0.0991) |
| **ndcg@10** | **0.8364 (+0.1539)** | **0.8362 (+0.1533)** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### vodex-turkish-triplets-large
* Dataset: [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) at [1fe9d63](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large/tree/1fe9d63490a69cb96da6b76f4bff1a43c48cbdee)
* Size: 215,676 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 57 characters</li><li>mean: 141.8 characters</li><li>max: 282 characters</li></ul> | <ul><li>min: 61 characters</li><li>mean: 145.94 characters</li><li>max: 325 characters</li></ul> | <ul><li>min: 62 characters</li><li>mean: 119.94 characters</li><li>max: 235 characters</li></ul> |
* Samples:
| query | positive | negative |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Yeni Red Business VIP tarifesi, yüksek veri ve dakika ihtiyaçları olan işletmeler için tasarlanmış bir premium seçenektir.</code> | <code>Red Business VIP, işletmelerin yoğun veri ve konuşma ihtiyaçlarını karşılamak için geliştirilmiş bir üst düzey tarifedir.</code> | <code>Vodafone'un kurumsal tarifeleri, yalnızca küçük işletmelerin düşük veri ihtiyaçlarına odaklanmaktadır.</code> |
| <code>Vodafone'un Yeni Uyumlu Hoşgeldin Kampanyası, belirli tarifeler için 12+12 ay taahhüt karşılığında indirimler sunmaktadır ve kampanya iki dönemden oluşmaktadır.</code> | <code>Vodafone'un Yeni Uyumlu Hoşgeldin Kampanyası, 12+12 ay taahhüt veren abonelere belirli tarifelerde ilk 12 ay için 20 TL, ikinci 12 ay için 15 TL indirim sağlamaktadır.</code> | <code>Vodafone'un Yeni Uyumlu Hoşgeldin Kampanyası, yalnızca faturasız hat kullanıcılarına özel olarak tasarlanmış bir kampanyadır ve taahhüt gerektirmez.</code> |
| <code>Vodafone'un Kolay Paketleri, faturasız hat kullanıcılarına TL yükleme gereksinimi olmadan avantajlı paketler sunar ve her ay otomatik yenilenmez.</code> | <code>Vodafone'un Kolay Paketleri, faturasız hat kullanıcıları için tasarlanmış olup, TL yükleme zorunluluğu olmadan satın alınabilir ve otomatik yenileme yapılmaz.</code> | <code>Vodafone'un Kolay Paketleri, faturalı hat kullanıcılarına özel olarak tasarlanmış ve her ay otomatik olarak yenilenen paketlerdir.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": 4,
"activation_fn": "torch.nn.modules.activation.Sigmoid",
"mini_batch_size": 32
}
```
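For reference, a minimal sketch of how this loss is typically constructed with the sentence-transformers cross-encoder API (the trainer and dataset plumbing are omitted; attaching a fresh scoring head to the base reranker with `num_labels=1` is an assumption about the training setup):
```python
from torch import nn
from sentence_transformers.cross_encoder import CrossEncoder
from sentence_transformers.cross_encoder.losses import CachedMultipleNegativesRankingLoss

model = CrossEncoder("Qwen/Qwen3-Reranker-0.6B", num_labels=1)

# Mirrors the parameters listed above: sigmoid-activated scores scaled by 10,
# four negatives per query, scored in mini-batches of 32 pairs to bound memory.
train_loss = CachedMultipleNegativesRankingLoss(
    model=model,
    num_negatives=4,
    scale=10.0,
    activation_fn=nn.Sigmoid(),
    mini_batch_size=32,
)
```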
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1024
- `learning_rate`: 1e-06
- `weight_decay`: 0.08
- `num_train_epochs`: 2
- `warmup_ratio`: 0.2
- `save_only_model`: True
- `bf16`: True
- `dataloader_num_workers`: 8
- `load_best_model_at_end`: True
- `group_by_length`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1024
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.08
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: True
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 8
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: True
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | val-hard_ndcg@10 | test-hard_ndcg@10 |
|:------:|:----:|:-------------:|:----------------:|:-----------------:|
| 0.2370 | 100 | 1.192 | 0.7554 (+0.0729) | 0.7552 (+0.0723) |
| 0.4739 | 200 | 0.0214 | 0.7909 (+0.1085) | 0.7892 (+0.1062) |
| 0.7109 | 300 | 0.0066 | 0.8135 (+0.1310) | 0.8115 (+0.1285) |
| 0.9479 | 400 | 0.0048 | 0.8143 (+0.1318) | 0.8141 (+0.1311) |
| 1.1848 | 500 | 0.0034 | 0.8281 (+0.1456) | 0.8270 (+0.1440) |
| 1.4218 | 600 | 0.0028 | 0.8321 (+0.1496) | 0.8319 (+0.1489) |
| 1.6588 | 700 | 0.0027 | 0.8334 (+0.1509) | 0.8333 (+0.1503) |
| 1.8957 | 800 | 0.0026 | 0.8364 (+0.1539) | 0.8362 (+0.1533) |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
seroe/jina-reranker-v2-base-multilingual-turkish-reranker-triplet-v2
|
seroe
| 2025-06-19T08:59:53Z | 45 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"cross-encoder",
"generated_from_trainer",
"dataset_size:89964",
"loss:CachedMultipleNegativesRankingLoss",
"text-ranking",
"custom_code",
"tr",
"dataset:seroe/vodex-turkish-triplets-large",
"arxiv:1908.10084",
"base_model:jinaai/jina-reranker-v2-base-multilingual",
"base_model:finetune:jinaai/jina-reranker-v2-base-multilingual",
"license:apache-2.0",
"model-index",
"region:us"
] |
text-ranking
| 2025-06-17T07:00:18Z |
---
language:
- tr
license: apache-2.0
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:89964
- loss:CachedMultipleNegativesRankingLoss
base_model: jinaai/jina-reranker-v2-base-multilingual
datasets:
- seroe/vodex-turkish-triplets-large
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: Jina Reranker v2 Base Multilingual fine-tuned on Turkish triplets
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: val hard
type: val-hard
metrics:
- type: map
value: 0.8007
name: Map
- type: mrr@10
value: 0.8049
name: Mrr@10
- type: ndcg@10
value: 0.8553
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: test hard
type: test-hard
metrics:
- type: map
value: 0.7958
name: Map
- type: mrr@10
value: 0.8009
name: Mrr@10
- type: ndcg@10
value: 0.8515
name: Ndcg@10
---
# Jina Reranker v2 Base Multilingual fine-tuned on Turkish triplets
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) on the [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
## ⚠️ Domain-Specific Warning
This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**.
While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains.
Please assess its performance carefully before applying it outside of telecommunications use cases.
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) <!-- at revision eed787badf7784e1a25c0eaa428627c8cbef511e -->
- **Maximum Sequence Length:** 1024 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large)
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("seroe/jina-reranker-v2-base-multilingual-turkish-reranker-triplet-v2")
# Get scores for pairs of texts
pairs = [
['Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?', 'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.'],
['Kampanya süresince internet hızı nasıl değişebilir?', 'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.'],
["Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?", "Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir."],
['Taahhüt süresi dolmadan internet hizmeti iptal edilirse ne olur?', 'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.'],
['Aylık 15 GB ek paketini nereden satın alabilirim?', 'Bu ek paketi almak için hangi kanalları kullanabilirim?'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?',
[
'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.',
'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.',
"Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.",
'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.',
'Bu ek paketi almak için hangi kanalları kullanabilirim?',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `val-hard` and `test-hard`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | val-hard | test-hard |
|:------------|:---------------------|:---------------------|
| map | 0.8007 (+0.1606) | 0.7958 (+0.1617) |
| mrr@10 | 0.8049 (+0.1648) | 0.8009 (+0.1667) |
| **ndcg@10** | **0.8553 (+0.2144)** | **0.8515 (+0.2168)** |
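The reranking metrics above come from the same evaluator class. A minimal sketch, assuming the documented `query`/`positive`/`negative` sample layout (the texts below are placeholders, not rows from the evaluation split):
```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderRerankingEvaluator

model = CrossEncoder("seroe/jina-reranker-v2-base-multilingual-turkish-reranker-triplet-v2")

samples = [
    {
        "query": "Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?",
        "positive": ["Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir."],
        "negative": ["Kampanya süresince internet hızı sabit kalır ve değişiklik yapılamaz."],
    },
]

evaluator = CrossEncoderRerankingEvaluator(samples=samples, at_k=10, always_rerank_positives=True, name="val-hard")
print(evaluator(model))
# {'val-hard_map': ..., 'val-hard_mrr@10': ..., 'val-hard_ndcg@10': ...}
```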
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### vodex-turkish-reranker-triplets
* Dataset: [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) at [ca7d206](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets/tree/ca7d2063ad4fec15fbf739835ab6926e051950c0)
* Size: 89,964 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 characters</li><li>mean: 57.83 characters</li><li>max: 112 characters</li></ul> | <ul><li>min: 35 characters</li><li>mean: 92.19 characters</li><li>max: 221 characters</li></ul> | <ul><li>min: 31 characters</li><li>mean: 78.41 characters</li><li>max: 143 characters</li></ul> |
* Samples:
| query | positive | negative |
|:-------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|
| <code>Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?</code> | <code>Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.</code> | <code>Faturasız tarifelerde yurtdışı mesaj ücretleri 10 kuruş olarak uygulanmaktadır.</code> |
| <code>Kampanya süresince internet hızı nasıl değişebilir?</code> | <code>Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.</code> | <code>Kampanya süresince internet hızı sabit kalır ve değişiklik yapılamaz.</code> |
| <code>Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?</code> | <code>Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.</code> | <code>Vodafone tarifelerinde KDV ve ÖİV, abonelerin talep etmesi durumunda eklenmektedir.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 10.0,
"num_negatives": 4,
"activation_fn": "torch.nn.modules.activation.Sigmoid",
"mini_batch_size": 32
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1024
- `learning_rate`: 1e-06
- `weight_decay`: 0.08
- `num_train_epochs`: 2
- `warmup_ratio`: 0.2
- `save_only_model`: True
- `bf16`: True
- `dataloader_num_workers`: 8
- `load_best_model_at_end`: True
- `group_by_length`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1024
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.08
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: True
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 8
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: True
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | val-hard_ndcg@10 | test-hard_ndcg@10 |
|:------:|:----:|:-------------:|:----------------:|:-----------------:|
| 0.5682 | 100 | 0.8032 | 0.8556 (+0.2147) | 0.8518 (+0.2171) |
| 1.1364 | 200 | 0.7903 | 0.8556 (+0.2147) | 0.8517 (+0.2169) |
| 1.7045 | 300 | 0.7883 | 0.8553 (+0.2144) | 0.8515 (+0.2168) |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
morturr/Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed18-2025-06-19
|
morturr
| 2025-06-19T08:57:23Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T05:38:03Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed18-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed18-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
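Until the sections above are filled in, here is a minimal, hedged sketch of how this PEFT adapter might be loaded on top of the base model (the adapter repo id is taken from this card's title; access to the gated base model and compatibility with your `peft`/`transformers` versions are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
# Adapter repo id taken from this card's title (assumed to host the adapter weights).
adapter_id = "morturr/Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed18-2025-06-19"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter

prompt = "Write a one-liner joke about online shopping."
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```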
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
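For orientation, the list above maps fairly directly onto `transformers.TrainingArguments`; the sketch below is an illustration under that assumption (the output directory and any TRL/SFT-specific options are not stated in the card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama2-loo-amazon-comb2-seed18",  # assumed; not given in the card
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # 16 x 4 = effective train batch size of 64
    num_train_epochs=2,
    lr_scheduler_type="linear",
    optim="adamw_torch",            # AdamW with betas=(0.9, 0.999) and epsilon=1e-08
    seed=18,
)
```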
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
John6666/gray-color-25d-model-v10-testing-sdxl
|
John6666
| 2025-06-19T08:56:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"2.5D",
"girls",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T08:50:42Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- 2.5D
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/1693405/graycolor-25d-model?modelVersionId=1916475).
This model was created by [GrayColor](https://civitai.com/user/GrayColor).
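A minimal usage sketch with `diffusers`, assuming this repository loads through `StableDiffusionXLPipeline` as its tags indicate (the prompt, step count, and guidance scale below are illustrative, not recommendations from the author):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/gray-color-25d-model-v10-testing-sdxl",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

image = pipe(
    "1girl, 2.5D, detailed eyes, soft lighting, masterpiece, best quality",
    negative_prompt="lowres, bad anatomy, worst quality",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("sample.png")
```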
|
Alphatao/Affine-5878053
|
Alphatao
| 2025-06-19T08:56:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:50:32Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B-Base
---
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement in reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
#     # Add: When the response content is `<think>this is the thought</think>this is the answer`;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters** (see the generation sketch after this list):
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output and does not need to include the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed.
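As a concrete illustration of the thinking-mode settings in item 1, here is a hedged sketch using the Hugging Face `generate` API with the `model` and `model_inputs` from the Quickstart (`min_p` requires a reasonably recent `transformers` release):
```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # avoid greedy decoding in thinking mode
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```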
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|