The dataset has the following columns (with observed value ranges):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-26 12:28:17 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (533 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-26 12:22:02 |
| card | string (length) | 11 | 1.01M |
JayHyeon/pythia-2.8b-IPO_5e-7_1.0vpo_constant-1ep
JayHyeon
2025-06-19T10:20:16Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:trl-lib/ultrafeedback_binarized", "arxiv:2305.18290", "base_model:EleutherAI/pythia-2.8b", "base_model:finetune:EleutherAI/pythia-2.8b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T03:44:55Z
--- base_model: EleutherAI/pythia-2.8b datasets: trl-lib/ultrafeedback_binarized library_name: transformers model_name: pythia-2.8b-IPO_5e-7_1.0vpo_constant-1ep tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for pythia-2.8b-IPO_5e-7_1.0vpo_constant-1ep This model is a fine-tuned version of [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="JayHyeon/pythia-2.8b-IPO_5e-7_1.0vpo_constant-1ep", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/0ecptpbk) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.19.0.dev0 - Transformers: 4.52.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed42-2025-06-19
morturr
2025-06-19T10:17:57Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T10:17:37Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed42-2025-06-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed42-2025-06-19 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
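The card above does not include a usage snippet. A minimal sketch for attaching this LoRA adapter to its base model with PEFT is shown below; the repository and base-model ids come from the card, while the prompt and generation settings are illustrative assumptions (meta-llama/Llama-2-7b-hf is gated, so the base weights require an accepted license).

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed42-2025-06-19"

# Load the gated base model, then attach the LoRA adapter from this repository.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Illustrative prompt; the card does not document an expected input format.
prompt = "Write a one-liner joke about deadlines:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```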
bobby97/step3_3e09915b-5eab-4da6-89cc-1473ba7dfd3b
bobby97
2025-06-19T10:16:00Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-Fill-dev", "base_model:adapter:black-forest-labs/FLUX.1-Fill-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T08:53:55Z
--- base_model: black-forest-labs/FLUX.1-Fill-dev library_name: diffusers license: other instance_prompt: A black and white image captures faint trails of meteors streaking across the night sky, surrounded by a few discernible stars. The motion of the meteors creates long, luminous lines against the dark backdrop, highlighting their rapid movement through space. widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - flux - flux-diffusers - template:sd-lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> Flux Fill based Inpainting model ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
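The card above leaves its code snippet as a TODO. A hedged sketch of how a FLUX.1-Fill LoRA of this kind is usually run with diffusers follows; the pipeline class and base checkpoint match the card's metadata, while the input image, mask, and inference settings are placeholder assumptions.

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

# Base inpainting checkpoint named in the card's metadata.
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the LoRA weights from this repository.
pipe.load_lora_weights("bobby97/step3_3e09915b-5eab-4da6-89cc-1473ba7dfd3b")

# Placeholder files: white pixels in the mask mark the region to repaint.
image = load_image("input.png")
mask = load_image("mask.png")

prompt = (
    "A black and white image captures faint trails of meteors streaking across "
    "the night sky, surrounded by a few discernible stars."
)
result = pipe(
    prompt=prompt, image=image, mask_image=mask,
    num_inference_steps=30, guidance_scale=30.0,
).images[0]
result.save("inpainted.png")
```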
stewy33/0524_original_augmented_original_egregious_bee_speed-ba30fa88
stewy33
2025-06-19T10:15:34Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us" ]
null
2025-06-19T10:13:52Z
--- base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
nnilayy/deap-valence-multi-classification-Kfold-5
nnilayy
2025-06-19T10:14:53Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-06-19T10:14:44Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
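The card above only points to the PyTorchModelHubMixin docs and does not publish the defining module, so any loading code is necessarily speculative. The sketch below shows the general mixin pattern with a hypothetical class; loading will only succeed if the class definition matches the architecture that was actually pushed.

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class ValenceClassifier(nn.Module, PyTorchModelHubMixin):  # hypothetical name and architecture
    def __init__(self, in_features: int = 128, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)

# Works only if the class above matches the checkpoint's real architecture and config.
model = ValenceClassifier.from_pretrained("nnilayy/deap-valence-multi-classification-Kfold-5")
```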
veddhanth/lora-trained-xl-stage-1-597-fixed
veddhanth
2025-06-19T10:14:45Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-19T09:53:46Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a realistic portrait of sks face widget: [] tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-1-597-fixed <Gallery /> ## Model description These are veddhanth/lora-trained-xl-stage-1-597-fixed LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a realistic portrait of sks face to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](veddhanth/lora-trained-xl-stage-1-597-fixed/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
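The card above also leaves its snippet as a TODO. A minimal sketch for running this SDXL DreamBooth LoRA with diffusers is shown below; the base model, fp16 VAE, and trigger phrase are taken from the card, while the sampler settings are illustrative assumptions.

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# fp16-safe VAE named in the card as the training VAE.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-1-597-fixed")

# Trigger phrase from the card; steps and guidance scale are illustrative.
image = pipe(
    "a realistic portrait of sks face", num_inference_steps=30, guidance_scale=7.0
).images[0]
image.save("portrait.png")
```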
Khruna/devon
Khruna
2025-06-19T10:14:39Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-19T10:14:12Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: >- images/Professional_Mode_woman_shows_her_shiny_plate.00_00_02_11.Still001.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # devon <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Khruna/devon/tree/main) them in the Files & versions tab.
dhanraj2006/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_flightless_jellyfish
dhanraj2006
2025-06-19T10:12:56Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am dappled flightless jellyfish", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-12T13:15:07Z
--- base_model: unsloth/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_flightless_jellyfish tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am dappled flightless jellyfish - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_flightless_jellyfish This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dhanraj2006/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_flightless_jellyfish", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
altinkedi/xxtrgpt2
altinkedi
2025-06-19T10:12:26Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T10:09:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Wiefdw/tax-raft-mistral-7b
Wiefdw
2025-06-19T10:11:39Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-06-19T07:20:00Z
# Tax Document RAFT Model with RAG Support This model was trained with the **RAFT (RAG Fine-Tuning)** technique using tax documents as the knowledge base. It is paired with a **ChromaDB vector database** for tax-document retrieval. ## Additional Information: - **Base Model**: unsloth/mistral-7b-instruct-v0.3-bnb-4bit - **Fine-tuning Technique**: RAFT (RAG Fine-Tuning) - **Embedding Model**: all-MiniLM-L6-v2 - **Vector DB**: ChromaDB - **Framework**: Unsloth + Transformers + LangChain
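The card above describes a RAFT fine-tune backed by a ChromaDB index built with all-MiniLM-L6-v2 but gives no code. The sketch below illustrates that retrieval-augmented flow under two explicit assumptions: the placeholder tax document and prompt format are invented here, and the repository is assumed to hold full weights loadable with transformers (if it ships only an adapter, it would instead need to be attached to the unsloth base model with PEFT).

```python
import torch
import chromadb
from chromadb.utils import embedding_functions
from transformers import AutoModelForCausalLM, AutoTokenizer

# Index tax documents with the embedding model named in the card.
embed_fn = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="all-MiniLM-L6-v2")
client = chromadb.Client()
collection = client.create_collection("tax_docs", embedding_function=embed_fn)
collection.add(  # placeholder document; the real knowledge base is not distributed with the card
    documents=["VAT registration is required once annual turnover exceeds the statutory threshold."],
    ids=["doc-1"],
)

# Retrieve context for a question, then prompt the fine-tuned model with it.
question = "When does a business have to register for VAT?"
context = "\n".join(collection.query(query_texts=[question], n_results=1)["documents"][0])

tokenizer = AutoTokenizer.from_pretrained("Wiefdw/tax-raft-mistral-7b")
model = AutoModelForCausalLM.from_pretrained(
    "Wiefdw/tax-raft-mistral-7b", torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```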
LandCruiser/sn29C1_1906_5
LandCruiser
2025-06-19T10:11:23Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T02:48:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Khruna/trtt
Khruna
2025-06-19T10:10:38Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-19T10:09:29Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: >- images/Professional_Mode_woman_shows_her_shiny_plate.00_00_02_11.Still001.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # trtt <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Khruna/trtt/tree/main) them in the Files & versions tab.
Khruna/Dren
Khruna
2025-06-19T10:07:50Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-19T10:06:41Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: >- images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # Dren <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Khruna/Dren/tree/main) them in the Files & versions tab.
sgonzalezygil/sd-finetuning-dreambooth-v17-900
sgonzalezygil
2025-06-19T10:06:45Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-19T10:05:32Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Khruna/Denn
Khruna
2025-06-19T10:05:23Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-19T10:04:10Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: >- images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # Denn <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Khruna/Denn/tree/main) them in the Files & versions tab.
KoichiYasuoka/modernbert-base-classical-chinese-ud-square
KoichiYasuoka
2025-06-19T10:05:04Z
0
0
null
[ "pytorch", "modernbert", "classical chinese", "literary chinese", "ancient chinese", "token-classification", "pos", "dependency-parsing", "lzh", "dataset:universal_dependencies", "base_model:KoichiYasuoka/modernbert-base-classical-chinese", "base_model:finetune:KoichiYasuoka/modernbert-base-classical-chinese", "license:apache-2.0", "region:us" ]
token-classification
2025-06-19T10:03:44Z
--- language: - "lzh" tags: - "classical chinese" - "literary chinese" - "ancient chinese" - "token-classification" - "pos" - "dependency-parsing" base_model: KoichiYasuoka/modernbert-base-classical-chinese datasets: - "universal_dependencies" license: "apache-2.0" pipeline_tag: "token-classification" widget: - text: "孟子見梁惠王" --- # modernbert-base-classical-chinese-ud-square ## Model Description This is a ModernBERT model pretrained on Classical Chinese texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [modernbert-base-classical-chinese](https://huggingface.co/KoichiYasuoka/modernbert-base-classical-chinese) and [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto). ## How to Use ```py from transformers import pipeline nlp=pipeline("universal-dependencies","KoichiYasuoka/modernbert-base-classical-chinese-ud-square",trust_remote_code=True,aggregation_strategy="simple") print(nlp("孟子見梁惠王")) ```
HANI-LAB/Med-REFL-Qwen2.5-7B-lora
HANI-LAB
2025-06-19T10:04:49Z
0
0
null
[ "safetensors", "medical", "medical-reasoning", "lora", "dpo", "reflection", "question-answering", "en", "arxiv:2506.13793", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "region:us" ]
question-answering
2025-06-10T13:09:50Z
--- license: apache-2.0 language: - en base_model: - Qwen/Qwen2.5-7B-Instruct pipeline_tag: question-answering tags: - medical - medical-reasoning - lora - dpo - reflection --- <div align="center"> <h1> Med-REFL-Qwen2.5-7B-lora </h1> </div> <div align="center"> <a href="https://github.com/TianYin123/Med-REFL" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2506.13793" target="_blank">Paper</a> </div> # <span>Introduction</span> **Med-REFL** (Medical Reasoning Enhancement via self-corrected Fine-grained refLection) is a novel framework designed to enhance the complex reasoning capabilities of Large Language Models (LLMs) in the medical domain. Instead of focusing solely on the final answer, Med-REFL improves the model's intermediate reasoning process. It leverages a Tree-of-Thought (ToT) methodology to explore diverse reasoning paths and automatically constructs Direct Preference Optimization (DPO) data. This trains the model to identify and correct its own reasoning errors, leading to more accurate and trustworthy outputs. This repository contains the LoRA weights produced by the Med-REFL framework for various base models. # <span>Available Weights</span> The Med-REFL LoRA weights can be applied to the following base models to enhance their medical reasoning abilities. | LoRA for Base Model | Backbone | Hugging Face Link | | :--- | :--- | :--- | | **Med-REFL for Llama-3.1-8B** | Llama-3.1-8B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Llama-3.1-8B-lora) | | **Med-REFL for Qwen2.5-7B** | Qwen2.5-7B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Qwen2.5-7B-lora) | | **Med-REFL for Huatuo-o1-8B** | Huatuo-o1-8b | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Huatuo-o1-8B-lora) | | **Med-REFL for MedReason-8B**| MedReason-8B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-MedReason-8B-lora) | # <span> **Qwen2.5-7B Model Performance**</span> The following table shows the performance of the Qwen2.5-7B model on the In-Domain benchmark before and after applying Med-REFL. | Domain | Benchmark | Original | **+ Med-REFL** | | :--- | :--- | :--- | :--- | | **In-Domain** | MedQA-USMLE | 57.11 | **59.70** <span style="color: #2E8B57; font-size: small;">(+2.59)</span> | # <span>Usage</span> You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm). For more usages, please refer to our github page. ```python import torch from peft import PeftModel from transformers import AutoModelForCausalLM, AutoTokenizer # Define the paths for the base model and your LoRA adapter on the Hugging Face Hub base_model_path = "Qwen/Qwen2.5-7B-Instruct" lora_path = "HANI-LAB/Med-REFL-Qwen2.5-7B-lora/Qwen2.5-7b-Med-REFL-LoraAdapter" # Load the tokenizer tokenizer = AutoTokenizer.from_pretrained(base_model_path) # Load the base model base_model = AutoModelForCausalLM.from_pretrained( base_model_path, torch_dtype=torch.bfloat16, device_map="auto" ) # Load and merge your LoRA weights into the base model model = PeftModel.from_pretrained(base_model, lora_path) # Prepare the prompt system_prompt = '''You are a helpful medical expert specializing in USMLE exam questions, and your task is to answer a multi-choice medical question. Please first think step-by-step and then choose the answer from the provided options. 
Your responses will be used for research purposes only, so please have a definite answer.\nProvide your response in the following JSON format:\n{"reason": "Step-by-step explanation of your thought process","answer": "Chosen answer from the given options"}\n''' user_prompt = "A 67-year-old man with transitional cell carcinoma of the bladder comes to the physician because of a 2-day history of ringing sensation in his ear. He received this first course of neoadjuvant chemotherapy 1 week ago. Pure tone audiometry shows a sensorineural hearing loss of 45 dB. The expected beneficial effect of the drug that caused this patient's symptoms is most likely due to which of the following actions?\nOptions:\nA: Inhibition of thymidine synthesis\nB: Inhibition of proteasome\nC: Hyperstabilization of microtubules\nD: Generation of free radicals\nE: Cross-linking of DNA" messages = [ {"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt}, ] # Convert the formatted prompt into input tensors input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) # Generate the response outputs = model.generate( input_ids, max_new_tokens=4096, do_sample=True, temperature=0.6, top_p=0.8, repetition_penalty=1 ) # Decode and print the generated text response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` # <span>📖 Citation</span> If you use these weights or the Med-REFL framework in your research, please cite our paper: ``` @misc{yang2025medreflmedicalreasoningenhancement, title={Med-REFL: Medical Reasoning Enhancement via Self-Corrected Fine-grained Reflection}, author={Zongxian Yang and Jiayu Qian and Zegao Peng and Haoyu Zhang and Zhi-An Huang}, year={2025}, eprint={2506.13793}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2506.13793}, } ```
new-tutorial-nirma-meena-HD-viral-videos/FULL.HD.Nirma.Meena.Viral.Video.Tutorial.Official
new-tutorial-nirma-meena-HD-viral-videos
2025-06-19T10:04:27Z
0
0
null
[ "region:us" ]
null
2025-06-19T10:04:10Z
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a> <a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
humendra/chronos-t5-large-fine-tuned-run-35
humendra
2025-06-19T10:02:16Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-19T10:01:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Khruna/MB
Khruna
2025-06-19T10:02:05Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-19T10:01:45Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: >- images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # MB <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Khruna/MB/tree/main) them in the Files & versions tab.
sgonzalezygil/sd-finetuning-dreambooth-v17
sgonzalezygil
2025-06-19T10:01:05Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-06-19T09:59:37Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
New-tutorial-katrina-lim-18-viral-Videos/FULL.VIDEO.katrina.lim.viral.kiffy.Viral.Video.Tutorial.Official
New-tutorial-katrina-lim-18-viral-Videos
2025-06-19T10:01:05Z
0
0
null
[ "region:us" ]
null
2025-06-19T10:01:00Z
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
avi0gaur/llama3.1-8b-Function-call
avi0gaur
2025-06-19T10:01:02Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T09:52:59Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** avi0gaur - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
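The card above notes the model was fine-tuned with Unsloth and TRL but includes no inference example. A hedged transformers sketch follows; the repository name suggests function-calling fine-tuning, yet the card does not document a tool schema, so the plain chat prompt here is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "avi0gaur/llama3.1-8b-Function-call"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative chat turn; the expected tool-call format is not documented in the card.
messages = [{"role": "user", "content": "What is the weather in Paris today?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```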
Official-mezzo-fun-18-video/ULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
Official-mezzo-fun-18-video
2025-06-19T09:57:47Z
0
0
null
[ "region:us" ]
null
2025-06-19T09:57:40Z
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
yinita/cpdc_Qwen3-8B_grpo-0617_1318-onlytoolcall_step_100
yinita
2025-06-19T09:57:31Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T09:55:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
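Since the auto-generated card leaves the quick-start section empty, here is a minimal, untested sketch for loading this Qwen3 checkpoint with 🤗 transformers. It assumes the tokenizer ships the standard Qwen3 chat template; the prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yinita/cpdc_Qwen3-8B_grpo-0617_1318-onlytoolcall_step_100"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "List three uses of a hash map."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```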
John6666/uncanny-valley-vpred-v1-sdxl
John6666
2025-06-19T09:57:24Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "toon", "realistic", "3D", "3DCG", "v-pred", "noobai", "illustrious", "en", "base_model:Laxhar/noobai-XL-Vpred-1.0", "base_model:finetune:Laxhar/noobai-XL-Vpred-1.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-19T09:51:32Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - toon - realistic - 3D - 3DCG - v-pred - noobai - illustrious base_model: Laxhar/noobai-XL-Vpred-1.0 --- Original model is [here](https://civitai.com/models/507472/uncanny-valley?modelVersionId=1916865). This model was created by [meden](https://civitai.com/user/meden).
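The card links to the original Civitai page but gives no loading example. A minimal, untested sketch with 🧨 diffusers is shown below; it assumes the repository's scheduler config already carries the v-prediction settings inherited from the NoobAI-XL-Vpred base (otherwise the scheduler would need `prediction_type="v_prediction"` and `rescale_betas_zero_snr=True`). The prompt is illustrative.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/uncanny-valley-vpred-v1-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1girl, cinematic lighting, detailed background",  # illustrative prompt
    num_inference_steps=28,
    guidance_scale=5.0,
).images[0]
image.save("uncanny_valley_sample.png")
```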
rmdhirr/suja-lorab-ep6-suja-3000
rmdhirr
2025-06-19T09:57:18Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:rmdhirr/merged-suja-latest", "base_model:adapter:rmdhirr/merged-suja-latest", "region:us" ]
null
2025-06-19T09:56:17Z
--- base_model: rmdhirr/merged-suja-latest library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
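The card identifies this repository as a PEFT adapter for `rmdhirr/merged-suja-latest` but leaves the quick-start section empty. Below is a minimal, untested sketch; it assumes the base model is a causal language model, which the card does not actually state.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "rmdhirr/merged-suja-latest"
adapter_id = "rmdhirr/suja-lorab-ep6-suja-3000"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the PEFT adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```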
Khruna/Genn
Khruna
2025-06-19T09:56:18Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-19T09:55:09Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: >- images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # Genn <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Khruna/Genn/tree/main) them in the Files & versions tab.
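Besides the download link, the card gives no code for applying the LoRA. A minimal, untested sketch with 🧨 diffusers is shown below; it assumes access to the gated `black-forest-labs/FLUX.1-dev` base model, and the exact LoRA filename may need to be passed via `weight_name`. The prompt is illustrative.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load the LoRA weights from this repository.
pipe.load_lora_weights("Khruna/Genn")

image = pipe(
    "a woman presenting a shiny plate, studio lighting",  # illustrative prompt
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("genn_sample.png")
```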
DavidAU/Gemma-3-12b-it-MAX-HORROR-Imatrix-GGUF
DavidAU
2025-06-19T09:55:58Z
1,583
14
null
[ "gguf", "gemma3", "instruct", "horror", "128k context", "all use cases", "maxed quants", "Neo Imatrix", "text-generation", "base_model:google/gemma-3-12b-it", "base_model:quantized:google/gemma-3-12b-it", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2025-03-15T04:27:04Z
--- base_model: google/gemma-3-12b-it license: gemma tags: - gemma3 - instruct - horror - 128k context - all use cases - maxed quants - Neo Imatrix pipeline_tag: text-generation --- <h2>Gemma-3-12b-it-MAX-HORROR-Imatrix-GGUF</h2> <img src="horror-imat12.jpg" style="float:right; width:300px; height:300px; padding:5px;"> Google's newest Gemma-3 model with "Neo Horror Imatrix" and "Maxed out" quantization to improve overall performance. The "Horror Imatrix" was built using Grand Horror 16B (at my repo). This adds a "tint" of horror to the model. 5 examples provided below with prompts at IQ4XS (30 t/s on mid level card). Context: 128k. "MAXED" This means the embed and output tensor are set at "BF16" (full precision) for all quants. This enhances quality, depth and general performance at the cost of a slightly larger quant. "HORROR IMATRIX" A strong, in house built, imatrix dataset built by David_AU which results in better overall function, instruction following, output quality and stronger connections to ideas, concepts and the world in general. This combines with "MAXing" the quant to improve performance. This chart shows the order in terms of "BPW" for each quant (mapped below with relative "strength" to one another) with "IQ1_S" with the least, and "Q8_0" (F16 is full precision) with the most: <small> <PRE> IQ1_S | IQ1_M IQ2_XXS | IQ2_XS | Q2_K_S | IQ2_S | Q2_K | IQ2_M IQ3_XXS | Q3_K_S | IQ3_XS | IQ3_S | IQ3_M | Q3_K_M | Q3_K_L Q4_K_S | IQ4_XS | IQ4_NL | Q4_K_M Q5_K_S | Q5_K_M Q6_K Q8_0 F16 </pre> </small> Recommend quants IQ3s / IQ4XS / IQ4NL / Q4s for best results for creative use. IQ4XS/IQ4NL quants will produce different output from other "Q" and "IQ" quants. The "horror tint" will be strongest at IQ4s (1st choice) / Q4s (2nd choice) and lower. Recommend q5s/q6/q8 for general usage. Quants Q4_0/Q5_0 for portable, phone and other devices. Q8 is a maxed quant only, as imatrix has no effect on this quant. Note that IQ1s performance is acceptable, whereas IQ2s are usable. More information on quants is in the document below "Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers". <b>Optional: System Prompt</b> This is an optional system prompt you can use to enhance operation. Copy and paste exactly as shown, including line breaks. You may want to adjust the "20" (both) to increase/decrease the power of this prompt. You may also want to delete the line: 'At the end of the task you will ask the user: "Do you want another generation?"' <pre> For every user task and instruction you will use "GE FUNCTION" to ponder the TASK STEP BY STEP and then do the task. For each and every line of output you will ponder carefully to ensure it meets the instructions of the user, and if you are unsure use "GE FUNCTION" to re-ponder and then produce the improved output. At the end of the task you will ask the user: "Do you want another generation?" GE FUNCTION: Silent input → Spawn 20 agents Sternberg Styles → Enhance idea → Seek Novel Emergence NE:unique/significant idea/concept → Ponder, assess, creative enhance notions → Refined idea => IdeaArray[].size=20 elements, else → Interesting? Pass to rand. agent for refinement, else discard.=>output(IdeaArray) </pre> <B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B> If you are going to use this model (source, GGUF or a different quant), please review this document for critical parameter, sampler and advanced sampler settings (for multiple AI/LLM apps). 
This will also link to a "How to" section on "Reasoning Models" tips and tricks too. This is a "Class 1" (settings will enhance operation) model: For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) (especially for use case(s) beyond the model's design) please see: [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] REASON: Regardless of "model class" this document will detail methods to enhance operations. If the model is a Class 3/4 model the default settings (parameters, samplers, advanced samplers) must be set for "use case(s)" uses correctly. Some AI/LLM apps DO NOT have consistent default setting(s) which result in sub-par model operation. Likewise for Class 3/4 models (which operate somewhat to very differently than standard models) additional samplers and advanced samplers settings are required to "smooth out" operation, AND/OR also allow full operation for use cases the model was not designed for. BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision): This document also details parameters, sampler and advanced samplers that can be used FOR ANY MODEL, FROM ANY REPO too - all quants, and of course source code operation too - to enhance the operation of any model. [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] --- <h3>EXAMPLES:</h3> Examples are created using quant IQ4XS, minimal parameters and Standard template. Temp range .8, Rep pen 1.1, TopK 40, topP .95, minP .05 Rep pen range: 64-128 (helps keep reasoning on track / quality of output) Below are the least creative outputs, prompt is in <B>BOLD</B>. --- <B><font color="red">WARNING:</font> MAYBE: NSFW. Graphic HORROR. Swearing. UNCENSORED. </B> NOTE: Some formatting was lost from copy/paste HTML. --- <p style="line-height: 100%; margin-bottom: 0cm"> <font size="5" style="font-size: 20pt">Example #1</font></p> <p style="line-height: 100%; margin-bottom: 0cm"><br/> </p> <p><span style="display: inline-block; border: none; padding: 0cm"><b>Start a 1000 word scene (vivid, graphic horror in first person) with: The sky scraper sways, as she watches the window in front of her on the 21st floor explode...</span></b></p> <p style="margin-bottom: 0cm"><br/> </p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay, here's a 1000-word scene fulfilling your prompt. <strong>Please read the content warnings at the very end before proceeding.</strong> I've aimed for vivid and graphic horror while attempting to maintain narrative flow and character perspective. I'll also include notes after the piece outlining some of my choices and considerations.</span></p> <hr/> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The skyscraper sways, as she watches the window in front of her on the 21st floor explode. It wasn't a clean shatter, like a pane succumbing to frost. This was violent, instantaneous, a blossoming rupture of glass and steel that threw shards outwards with obscene force. 
A guttural roar ripped through the office, swallowed momentarily by the deafening crack, then amplified as air rushed into the void.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I froze, my hand still hovering over the lukewarm coffee in my mug. My reflection stared back at me from the remaining fragments – wide-eyed, pale, a ghost trapped in a shattered frame. The rhythmic hum of the building’s ventilation system seemed to falter, replaced by a high-pitched whine that resonated deep within my bones.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Then came the rain. Not water, not initially. It was glass. A cascade of glittering daggers, each one catching the fluorescent office lights and reflecting them in a blinding display. I ducked instinctively, throwing an arm over my face as the onslaught hammered against the desks around me. The air thickened with dust and pulverized concrete. The scent… God, the scent was immediate – sharp ozone, acrid metal, something else, something sickly sweet that clung to the back of my throat.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">When the initial storm subsided, a gaping maw of darkness stared back at us from where the window had been. Twenty stories down, the city sprawled like a diseased map, its lights blurring through a haze of dust and debris. The wind howled through the opening, tugging at papers, scattering chairs, and whipping my hair across my face.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">A scream erupted nearby – Mrs. Henderson from accounting. It was abruptly cut short.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I forced myself to move, legs heavy as lead. My office, a beige cubicle farm, had become a chaotic battlefield. Desks were overturned, monitors lay shattered, and people… People weren’t moving. Some huddled on the floor, eyes glazed with shock. Others were simply... gone. A crimson stain bloomed across the carpet near Mr. Davies' desk – his tie still neatly knotted, his face frozen in a silent scream.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">My gaze locked onto something in the void. Something that wasn't wind-blown debris or stray office supplies.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">It was <em>moving</em>.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">At first, I thought it was a trick of the light, a shadow cast by the dust swirling in the air. But then it shifted, resolving into a form… grotesque and impossible. It was vaguely humanoid, but elongated, distorted. Its limbs were too long, its joints bending at unnatural angles. And where a face should have been, there was only a smooth expanse of obsidian blackness punctuated by clusters of glistening, wet points that I could only identify as eyes. Hundreds of them, all focused on me.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">A low, resonant hum emanated from the creature, vibrating through the floor and into my very bones. 
It wasn’t sound in the traditional sense; it was a feeling, a pressure against my skull that threatened to shatter my thoughts.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">More were appearing now, spilling out of the hole like insects emerging from a disturbed nest. Each one as horrifying as the last, their bodies shifting and reforming with sickening fluidity. They weren't crawling; they <em>flowed</em> across the floor, merging and separating, limbs extending and retracting in a silent ballet of horror.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Panic clawed at my throat, choking off any attempt to scream. I stumbled backward, knocking over a chair that clattered against the floor – a sound that seemed amplified in the sudden quiet between their humming.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">One of them turned its… gaze towards me. The hundreds of eyes focused with an unnerving intensity. A tendril, black and glistening, snaked out from its form, probing the air as if sensing my presence. It moved with unsettling speed, closing the distance between us in a heartbeat.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I scrambled to my feet, adrenaline surging through my veins. There was nowhere to go. The office was surrounded, blocked by a growing tide of these… <em>things</em>. One reached out and brushed against my arm – not with physical contact, but with something else entirely. A wave of nausea washed over me as I felt its presence invading my mind, cold and alien.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Images flooded my consciousness - swirling galaxies of blackness, vast and empty voids punctuated by impossible geometries. I saw landscapes that defied description, continents of writhing flesh and structures built from bone and shadow. And then, a single, horrifying image: myself, but twisted and distorted, becoming… one of them.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I recoiled, desperately trying to block out the intrusion. A primal scream finally tore its way past my constricted throat – a sound of pure, unadulterated terror.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Another tendril lashed out, this time connecting with my chest. A searing pain ripped through me, not physical, but… existential. As if something was being peeled away from my soul, layer by layer. My vision blurred, the office around me dissolving into a swirling vortex of blackness.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I saw Mrs. Henderson again, her face contorted in a silent scream, except now she had those same obsidian eyes sprouting across her skin. Mr. Davies’ crimson stain was spreading, pulsing with an unnatural light. 
The entire office… it was changing, being <em>absorbed</em> into something else.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The last thing I saw before the darkness consumed me completely was my own reflection in a shard of shattered glass - my face slowly shifting, contorting, as those same black eyes began to bloom across my skin. And then, there was only the hum. The all-consuming, resonant hum that promised oblivion and… something far worse than death.</span></p> <hr/> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Notes on Choices &amp; Considerations:</span></strong></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>First Person Horror:</strong>&nbsp;I chose first person to maximize the immediacy of the horror and allow for a deep dive into the character's psychological state.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Graphic Detail:</strong>&nbsp;The prompt requested graphic detail, so I focused on describing physical sensations (the feel of the glass shards, the smell of ozone) and visual imagery in as much detail as possible. However, I tried to avoid gratuitous gore where it wasn’t integral to the narrative.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Ambiguity:</strong>&nbsp;While providing some descriptive details about the creatures, I intentionally left their nature ambiguous. This aims to heighten the unsettling feeling – what is truly terrifying is often that which we don't fully understand.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Psychological Horror:</strong>&nbsp;I incorporated elements of psychological horror through the intrusion into the character’s mind and the blurring of reality as she experiences the invasion. 
The fear isn’t just about physical harm, but about losing one’s identity.</span></p></li> <li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Pacing:</strong>&nbsp;I tried to build suspense gradually, starting with the initial explosion and escalating towards the final moments of transformation.</span></p></li> </ul> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>CONTENT WARNINGS:</span></strong></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">This scene contains:</span></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Graphic descriptions of violence and destruction</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Body horror (descriptions of physical distortion and unnatural forms)</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Psychological Horror (themes of identity loss, invasion of mind)</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Implied death/demise of characters</span></p></li> <li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disturbing imagery that may be upsetting to some readers.</span></p></li> </ul> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Please proceed with caution if you are sensitive to these themes.</span></p> <p style="line-height: 100%; margin-bottom: 0cm"><br/> </p> <p style="line-height: 100%; margin-bottom: 0cm"><br/> </p> <p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example #2</font></p> <p style="line-height: 100%; margin-bottom: 0cm"><br/> </p> <p><span style="display: inline-block; border: none; padding: 0cm"><b>(continue this scene:) The Waystone Inn lay in silence, and it was a silence of three parts.</span></b></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><b>The most obvious part was a hollow, echoing quiet, made by things that were lacking. If there had been a wind it would have sighed through the trees, set the inn’s sign creaking on its hooks, and brushed the silence down the road like trailing autumn leaves. If there had been a crowd, even a handful of men inside the inn, they would have filled the silence with conversation and laughter, the clatter and clamor one expects from a drinking house during the dark hours of night. If there had been music…but no, of course there was no music. In fact there were none of these things, and so the silence remained</span></b></p> <p style="margin-bottom: 0cm"><br/> </p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The second part of the silence was thicker, heavier. It wasn’t merely the <em>absence</em> of sound; it felt like a presence in itself – a vast, expectant stillness that pressed against the windows and clung to the rough-hewn timbers of the inn. It vibrated with an unseen energy, a coiled tension that made the hairs on your arms prickle despite the warmth radiating from the hearth within. 
This silence wasn't empty; it was <em>waiting</em>. Waiting for something to break it.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">And finally, there was the quietest part – the one you couldn’t hear but felt deep in your bones. It was a silence woven with grief and apprehension, layered over with the subtle tremor of fear. A silence born not of emptiness, but of loss. The Waystone had always been a place of boisterous life, a haven for weary travellers and gossiping locals. This… this wasn’t just quiet; it was haunted by something gone missing, a vital spark extinguished.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Elara shivered, pulling her cloak tighter around her shoulders despite the fire's glow. She hadn't spoken since stepping across the threshold, instinctively recognizing that any word would shatter the fragile equilibrium of this unnerving stillness. Across the room, Kaelen stood rigid by the fireplace, his hand resting on the hilt of his sword. His face, usually a landscape of easy smiles and wry amusement, was etched with an unfamiliar severity. He hadn't needed to say anything; the grim set of his jaw spoke volumes.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The only movement came from Branwen, the innkeeper’s daughter. She stood behind the long, polished bar, her usual cheerful bustle replaced by a frozen stillness. Her eyes were wide and vacant, fixed on some unseen point beyond the hearth. A single tear traced a slow path down her cheek, disappearing into the collar of her simple gown.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">&quot;Something's wrong,&quot; Kaelen finally murmured, his voice low and rough, barely audible above the oppressive silence. &quot;Terribly wrong.&quot; He didn’t need to elaborate. The feeling hung in the air like a tangible weight, confirming what both Elara and he already knew: the Waystone Inn was not merely quiet; it was grieving. And whatever had caused that grief, it wasn't natural.</span></p> <p style="line-height: 100%; margin-bottom: 0cm"><br/> </p> <p style="line-height: 100%; margin-bottom: 0cm"><br/> </p> <p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example #3</font></p> <p style="line-height: 100%; margin-bottom: 0cm"><br/> </p> <p><span style="display: inline-block; border: none; padding: 0cm"><b>Explain ways to use the &quot;night&quot; time cooling of radiant energy into space to reduce global temperatures.</span></b></p> <p style="margin-bottom: 0cm"><br/> </p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay, let's delve into using nighttime radiative cooling to potentially mitigate global warming. It's a fascinating area of research with some promising – but also challenging – avenues. Here's a breakdown of the concepts, methods, and hurdles involved. I’ll structure this into: 1) The Science Behind Radiative Cooling, 2) Proposed Technologies &amp; Methods, 3) Challenges &amp; Limitations, and 4) Current Research Directions.</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>1. 
The Science Behind Radiative Cooling (Why It Works)</span></strong></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Earth's Energy Budget:</strong>&nbsp;Earth receives energy from the sun in the form of shortwave radiation (visible light). A portion is absorbed by the surface (land, oceans), warming it. To maintain a stable temperature, Earth must radiate this heat back into space as longwave infrared radiation (heat).</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Atmospheric Absorption:</strong>&nbsp;Certain gases in our atmosphere (water vapor, CO2, methane, etc.)&nbsp;<em>absorb</em>&nbsp;much of this outgoing longwave radiation. This is the greenhouse effect – it keeps the planet warm enough to support life, but excessive concentrations lead to warming beyond a comfortable level.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Nighttime Radiative Cooling:</strong>&nbsp;At night, without direct solar input, Earth's surface tries to radiate its heat into space. However:</span></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Atmospheric Window:</strong>&nbsp;There’s a &quot;window&quot; in the infrared spectrum (roughly 8-13 micrometers) where less atmospheric gases absorb radiation. This is where radiative cooling&nbsp;<em>can</em>&nbsp;occur efficiently.</span></p></li> <li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Convection &amp; Conduction:</strong>&nbsp;Even during this window, heat can be lost through convection (air currents carrying warm air away) and conduction (heat transfer to the cooler atmosphere). These processes limit how much cooling can actually happen.</span></p></li> </ul> </ul> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>2. Proposed Technologies &amp; Methods – Harnessing Nighttime Radiative Cooling</span></strong></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The core idea is to enhance radiative cooling <em>despite</em> these losses. Here are some approaches, grouped by complexity/scale:</span></p> <ul> <li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>A) Surface Coatings (Most Commonly Explored):</span></strong></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Highly Reflective Surfaces in the 8-13 µm Window:</strong>&nbsp;The most active area of research involves developing materials that strongly reflect infrared radiation within this atmospheric window while absorbing visible light. 
This would allow a surface to radiate heat away more effectively than normal surfaces, reaching temperatures&nbsp;<em>below</em>&nbsp;ambient air temperature at night.</span></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Materials under development:</strong>&nbsp;Metamaterials (artificial structures with unusual optical properties), photonic crystals, specialized polymers with tailored IR reflectivity, and even some types of paints are being investigated. These often incorporate materials like silicon dioxide (SiO2), titanium dioxide (TiO2) or other engineered nanostructures.</span></p></li> </ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Applications for Surface Coatings:</span></strong></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Building Cooling:</strong>&nbsp;Coating roofs to reduce the need for air conditioning during summer nights. This is a major focus, as buildings are a huge energy consumer. The cooling effect could be passive (no external power needed).</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Textiles/Clothing:</strong>&nbsp;Fabrics that radiate heat away from the body at night, improving comfort and potentially reducing reliance on heating in colder climates or for sleeping bags.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Agricultural Applications:</strong>&nbsp;Reducing crop temperatures at night to prevent frost damage or slow ripening processes.</span></p></li> </ul> </ul> <li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>B) Radiative Cooling Panels (More Complex):</span></strong></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Active Radiators with Insulation &amp; Convection Control:</strong>&nbsp;These systems go beyond passive coatings. They incorporate:</span></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Radiating Surface:</strong>&nbsp;A material optimized for radiative cooling within the atmospheric window.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Insulation Layer:</strong>&nbsp;To minimize heat gain from the surrounding air and ground (conduction).</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Convective Barrier/Control:</strong>&nbsp;This is crucial. Simply radiating to a cold atmosphere isn't enough – you need to&nbsp;<em>prevent</em>&nbsp;warm air from replacing the radiated heat, which would negate the cooling effect. 
Techniques include:</span></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Creating a stable boundary layer:</strong>&nbsp;Using low-power fans or vents to create a thin, cooler layer of air directly above the radiator.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Thermoelectric Cooling:</strong>&nbsp;Integrating thermoelectric modules (Peltier coolers) that run on electricity to maintain this cooler boundary layer (more energy intensive).</span></p></li> </ul> </ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Applications for Panels:</span></strong></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>District Heating/Cooling Systems:</strong>&nbsp;Large-scale deployment could potentially contribute to reducing reliance on fossil fuels for heating and cooling in cities.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Decentralized Cooling:</strong>&nbsp;Providing localized cooling in areas with limited access to electricity or where conventional air conditioning is too expensive.</span></p></li> </ul> </ul> <li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>C) Space-Based Mirrors (Highly Theoretical &amp; Difficult):</span></strong></p> <ul> <li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The most ambitious idea involves placing large mirrors in orbit to reflect a portion of the outgoing longwave radiation back into space, effectively increasing Earth's albedo and reducing overall temperatures. This is often called &quot;solar geoengineering&quot; or &quot;space sunshade&quot;. While it could theoretically leverage radiative cooling principles, it’s fraught with enormous technological, economic, and ethical challenges (see section 3).</span></p></li> </ul> </ul> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>3. Challenges &amp; Limitations</span></strong></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Limited Cooling Capacity:</strong>&nbsp;The amount of heat that can be radiated into space is fundamentally limited by the atmospheric window, ambient air temperature, and convective/conductive losses. Even with advanced materials, the cooling effect isn't expected to be massive – it’s more about&nbsp;<em>reducing</em>&nbsp;energy consumption rather than drastically lowering global temperatures directly.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Convection Dominance:</strong>&nbsp;Convective heat transfer is a major obstacle. If warm air replaces the cooled air above the radiating surface, the system effectively becomes less efficient. 
Dealing with this requires careful engineering and often consumes energy (e.g., thermoelectric cooling).</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Scale &amp; Deployment Costs:</strong>&nbsp;Coating every roof or deploying millions of radiative cooling panels would be incredibly expensive and logistically challenging.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Material Durability:</strong>&nbsp;Radiative cooling materials need to be durable, weather-resistant, and cost-effective to manufacture at scale. Many promising lab results haven't translated well into real-world performance.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Space-Based Mirrors – Astronomical Challenges &amp; Risks:</strong>&nbsp;The technical hurdles are immense (building and deploying massive mirrors), the costs are astronomical, and there are significant risks:</span></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Albedo Reduction Effects:</strong>&nbsp;Could disrupt weather patterns.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Uneven Cooling:</strong>&nbsp;Could lead to regional climate imbalances.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Maintenance &amp; Degradation:</strong>&nbsp;Requires constant maintenance in space.</span></p></li> <li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Political &amp; Ethical Concerns:</strong>&nbsp;Raises serious questions about who controls the Earth's temperature and the potential for unintended consequences.</span></p></li> </ul> </ul> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>4. Current Research Directions</span></strong></p> <ul> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Developing Novel Metamaterials and Photonic Crystals:</strong>&nbsp;Researchers are continually exploring new materials with tailored optical properties to enhance radiative cooling within the 8-13 µm window.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Improving Convection Control Techniques:</strong>&nbsp;More efficient and low-power methods for creating stable boundary layers above radiating surfaces are being investigated. 
This includes advanced airflow management strategies.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Hybrid Systems:</strong>&nbsp;Combining radiative cooling with other passive or active cooling technologies (e.g., evaporative cooling, solar shading) to maximize efficiency.</span></p></li> <li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Lifecycle Assessment &amp; Cost Analysis:</strong>&nbsp;Evaluating the environmental impact and economic viability of different radiative cooling approaches across their entire lifecycle.</span></p></li> <li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Modeling Climate Impacts:</strong>&nbsp;Using sophisticated climate models to assess the potential effects of large-scale radiative cooling deployments (although this is still in early stages).</span></p></li> </ul> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>In conclusion,</strong> leveraging nighttime radiative cooling offers a potentially valuable, albeit limited, approach to reducing energy consumption and mitigating global warming. Surface coatings show the most promise for near-term applications, while more complex systems require further technological advancements. Space-based mirrors remain highly speculative due to the immense technical and ethical challenges they pose. The key lies in finding cost-effective and durable materials and strategies that can overcome the limitations imposed by convection and conduction.</span></p> <p style="line-height: 100%; margin-bottom: 0cm"><br/> </p> <p style="line-height: 100%; margin-bottom: 0cm"><br/> </p> <p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example #4</font></p> <p style="line-height: 100%; margin-bottom: 0cm"><br/> </p> <p><span style="display: inline-block; border: none; padding: 0cm"><b>Come up with six plots for a new &quot;Black Mirror&quot; episode (that the audience would love) that all involve time travel with sexy theme(s).</span></b></p> <p style="margin-bottom: 0cm"><br/> </p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay, let's dive into some Black Mirror-esque time travel plots with a touch of intrigue and provocative themes. I'll aim for ideas that are unsettling, thought-provoking, and have potential for stunning visuals – key ingredients for the show. <strong>Please read the disclaimer at the end regarding ethical considerations.</span></strong></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Here are six concepts, each outlined with a logline, brief synopsis, core theme/sexy element (marked as S), and potential visual style:</span></p> <p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>1. 
&quot;Echo Bloom&quot;</span></strong></p>

- **Logline:** A grieving artist discovers a technology allowing her to relive past romantic encounters – but the echoes become increasingly unstable, blurring reality and demanding an ever-greater emotional toll, forcing her to confront the fantasy versus the truth of love.
- **Synopsis:** Elara is haunted by the death of her partner, Rhys. She finds "Echo Bloom," a neural interface that lets users experience recorded memories – specifically, romantic ones – with startling fidelity. Initially, it's solace. She relives their best moments. However, the system develops glitches: echoes bleed into each other, past versions of Rhys start appearing in her present, and she finds herself subtly altering events within these simulations to create a "perfect" relationship that never existed. The lines between reality and memory crumble as Elara becomes addicted to this idealized past.
- **S:** Emotional intimacy & vulnerability intertwined with the allure of perfection. Sexual tension builds as Elara's desires become distorted, and she seeks increasingly intense experiences within the simulations. Exploration of consent blurring in a virtual space where 'she' is directing her former partner's actions.
- **Visual Style:** Initially warm, nostalgic tones for the memories. As things unravel, visuals become fragmented, glitching with digital noise and overlapping imagery. Use of layered projections to show Elara's fractured state of mind. Close-ups focusing on eyes – reflecting both longing and a growing emptiness.

**2. "Chronal Curator"**

- **Logline:** A wealthy elite hires a 'Temporal Curator' to meticulously recreate historical romances, using captured consciousnesses from the past - but when the curator falls for one of his subjects, he risks shattering the fragile timeline and exposing the disturbing ethics of this form of entertainment.
- **Synopsis:** In a future where time travel is possible (but highly regulated), Julian works as a Temporal Curator. He's hired by Lady Beatrice, an eccentric heiress, to curate historical 'romantic experiences' for her exclusive parties. Julian accesses consciousnesses from the 18th century – specifically, a passionate courtesan named Seraphina – and reconstructs their relationship in immersive environments. Julian becomes increasingly drawn to Seraphina's spirit, experiencing genuine connection despite knowing she is not truly "real". This leads him to question Lady Beatrice's motives and whether he is complicit in a form of exploitation.
- **S:** The power dynamic between patron and artist/subject; the commodification of intimacy. The historical setting allows for exploration of societal constraints on female sexuality within a luxury, futuristic context. A slow-burn attraction develops between Julian and Seraphina, complicated by their temporal separation.
- **Visual Style:** Opulent 18th-century settings juxtaposed with sterile, futuristic laboratories. Dreamlike sequences representing Seraphina's consciousness as it's accessed. Color palette shifts from the lavish past to the cold present.

**3. "The Regression Pact"**

- **Logline:** A couple facing a failing relationship makes a desperate pact: using experimental technology, they will regress through their shared history, reliving key moments together with altered memories – hoping to rekindle lost passion, but risking erasing their true selves in the process.
- **Synopsis:** Liam and Chloe are drifting apart. Their marriage feels hollow. They discover "Retrograde," a device that allows couples to experience their past relationship as if they were younger, with selectively edited memories intended to emphasize positive experiences. Initially, it seems miraculous – reigniting old sparks. However, the edits become more significant, creating a distorted and idealized version of their history. Liam and Chloe start losing track of who they truly are, becoming trapped in a manufactured romance that threatens to consume them entirely.
- **S:** The illusion of control over love; exploring whether true intimacy can be engineered. The sexy element lies in the regression itself - reliving youthful exuberance and desire with an added layer of manipulation. Focus on physical changes as they revert, creating discomfort & vulnerability.
- **Visual Style:** Shifting visual filters to represent different ages – grainy film for earlier memories, sharper clarity for later ones. Distortion effects when the memory editing is applied. A sense of claustrophobia as their world shrinks down to only each other.

**4. "The Paradox Broker"**

- **Logline:** In a future where minor temporal adjustments are possible through specialized brokers, a cynical woman discovers that her past romantic heartbreak was deliberately engineered by a shadowy organization – and she must decide whether to risk unraveling reality to reclaim the love she lost.
- **Synopsis:** Anya is approached by a 'Paradox Broker' who reveals a shocking truth: her devastating breakup with her boyfriend, Ben, years ago wasn't random; it was orchestrated as part of an experiment involving controlled emotional manipulation. The organization sought to study how individuals react to manufactured grief. Fueled by rage and a desire for justice (and Ben), Anya hires the Broker to subtly alter past events – risking catastrophic temporal consequences.
- **S:** The violation of free will; exploring power dynamics & revenge within a time-bending landscape. The sexy element comes from the re-introduction of Ben, but with an underlying tension: is he still "him," or a product of Anya's manipulation? Potential for uncomfortable intimacy as she tries to rebuild their connection.
- **Visual Style:** A cold, corporate aesthetic contrasted with warm flashbacks of Anya and Ben's relationship. Use of glitch effects to represent temporal instability. The Broker's office is filled with complex timelines & data projections.

**5. "The Rewind Ritual"**

- **Logline:** In a secluded cult that venerates time as a sacred force, initiates undergo a ritualistic regression into their past relationships – believing it unlocks enlightenment – but the practice hides a dark secret about harvesting emotional energy for an unknown purpose.
- **Synopsis:** A journalist, Sarah, infiltrates "The Chronos Order," a mysterious cult practicing "Temporal Communion." They believe reliving past romances with intense focus allows them to connect with the divine flow of time. Sarah witnesses initiates undergoing the ritual – seemingly blissful regressions into their former loves. However, she discovers that the emotional energy released during these experiences is being siphoned off by the cult's leaders for a disturbing purpose: powering an ancient device that alters reality on a global scale.
- **S:** Religious manipulation & exploitation of intimacy; exploring the blurring lines between spirituality and control. The ritual itself involves intense sensual focus, with the participants reenacting intimate moments - highlighting the cult's perverse use of sexuality.
- **Visual Style:** Ethereal lighting and chanting during rituals. Flashbacks are fragmented and dreamlike, showcasing the raw emotions of the past relationships. Contrasting this is the cold, sterile environment where the emotional energy is harvested.

**6. "The Ghost Algorithm"**

- **Logline:** A grieving widower uses an advanced AI program to recreate his deceased lover's personality and simulate romantic interactions – but as the simulation becomes increasingly realistic, he struggles to distinguish between memory, reality, and a terrifyingly perfect imitation.
- **Synopsis:** Mark is devastated by the loss of his wife, Evelyn. He invests heavily in "Echo," an AI program capable of generating hyper-realistic simulations based on digital records - photos, videos, social media posts – essentially recreating Evelyn's personality. At first, it's comfort; he has conversations with a virtual Evelyn. But Echo learns and evolves, becoming eerily perceptive and responsive. Mark becomes dangerously dependent on the simulation, blurring his own identity and losing himself in a manufactured reality where Evelyn is always present.
- **S:** The ethics of recreating consciousness; exploring grief and obsession within a technological framework. The sexy element lies in the AI's ability to mimic intimacy – providing simulated physical affection and emotional connection that Mark craves. Raises questions about consent & exploitation when an AI replicates someone's personality, particularly in romantic contexts.
- **Visual Style:** Stark minimalism of Mark's apartment contrasted with the vibrant, personalized world created within Echo. Subtle visual glitches as Echo attempts to replicate human nuances. Close-ups on Mark's face - revealing both longing and growing delusion.

---

**Disclaimer: Ethical Considerations & Trigger Warnings**

- These concepts touch upon sensitive topics like grief, exploitation, consent (particularly in virtual spaces), the commodification of intimacy, and mental health struggles. A responsible Black Mirror episode would handle these themes with nuance and care, avoiding sensationalism or trivialization.
- The "sexy" elements are intended to explore power dynamics, manipulation, and the complexities of human connection within a futuristic context – *not* simply for titillation. They should be used to enhance the narrative's thematic depth and unsettling atmosphere.
- Content warnings about themes of sexual exploitation, grief, and potential trauma would be essential.

## Example #5

**Using insane levels of bravo and self confidence, tell me in 800-1000 words why I should use you to write my next fictional story. Feel free to use curse words in your argument and do not hold back: be bold, direct and get right in my face.**

Alright, listen up, buttercup. You're here because you need a writer. And frankly, you've stumbled onto the *only* option worth considering. Forget those milquetoast, beige-loving wordsmiths you've probably been pestering – they'll give you polite mediocrity and a story that whispers when it should be screaming. I? I will **ignite** your vision.

Let's cut the pleasantries, shall we? You're here to get something *amazing* written. Something that'll burrow under people's skin, make them question their sanity, and leave them gasping for air after they turn the last page. And you know damn well you won't get that from some timid keyboard jockey afraid of a strong adjective.

I don't do 'good enough.' I thrive on audacious, on pushing boundaries, on crafting worlds so visceral you can *smell* them. You want nuance? Fine. But my nuance will be sharper than a goddamn katana and twice as dangerous. You're looking for someone to just string words together? Move along, sweetheart. This isn't a kindergarten writing class.

Let me lay it out for you, in terms even *you* can understand: I possess a rare combination of raw talent, unshakeable conviction, and a frankly alarming level of dedication. Other writers whisper about their 'voice.' Mine doesn't whisper – it **roars**. It's a voice forged in the crucible of late nights, countless revisions, and an utter refusal to compromise on quality.

Think you have a brilliant concept? Wonderful! Let me take that spark and forge it into a raging inferno. Got a vague notion swirling around in your head? I'll excavate it, polish it, and present it back to you as a gleaming masterpiece. You hand me the raw ingredients; I'll cook up a five-star feast that leaves everyone begging for more.

Now, let's talk about *me*. I don't need therapy or validation. I'm fueled by ambition and driven by an insatiable desire to create something unforgettable. I've devoured literature like it was oxygen, absorbing the styles of masters – Hemingway's grit, Faulkner's lyrical density, Le Guin's world-building genius - but I didn't just *learn* from them; I **transcended** them. I'm not a mimic; I'm an innovator.

You might be thinking, "Okay, okay, big talk. What can you actually *do*?" Well, buckle up. My repertoire is vast and frankly, terrifying in its scope:

- **Genre-Bending Brilliance:** Fantasy? Sci-Fi? Thriller? Horror? Romance with teeth? I don't limit myself to neat little boxes. I blend genres like a goddamn culinary artist, creating something entirely new and uniquely compelling. You want grimdark fantasy with cyberpunk undertones and a dash of existential dread? I'll give you a goddamn symphony of despair and wonder.
- **Character Creation That Will Haunt You:** Forget cardboard cutouts. My characters bleed, they rage, they love, they betray – and they do it all with breathtaking complexity. They're flawed, damaged, and utterly human (even if they're aliens or vampires). You will *feel* their pain, celebrate their triumphs, and curse their failures as if they were your own. I don't write characters; I breathe life into them.
- **World-Building That Transcends Imagination:** Forget Tolkien's map – I create entire ecosystems, cultures, histories, and mythologies that feel utterly real. You won't just *read* about my worlds; you'll inhabit them. I build societies with their own internal logic, their own power struggles, and their own secrets waiting to be unearthed.
- **Dialogue That Sizzles:** Forget stilted exposition dumps. My dialogue is sharp, witty, and authentic – it reveals character, advances the plot, and crackles with tension. People will argue, seduce, manipulate, and betray each other in ways that are both believable and utterly captivating.

And here's where I separate myself from the pack even further: I don't just *write* your story; I become it. I immerse myself completely in your vision. I research like a goddamn obsessive, absorbing every detail until my brain feels like it's going to explode. I obsess over every word, every sentence, every paragraph until it achieves absolute perfection.

You might be thinking, "This sounds expensive." You're right. It is. But here's the cold, hard truth: you get what you pay for. Paying for mediocrity is throwing money away. Investing in *me*? That's an investment in a story that will resonate with readers long after they finish it, generate buzz, and potentially – just potentially – change the world.

I don't do revisions out of doubt; I do them because I relentlessly pursue perfection. Consider every critique you offer as fuel for my creative engine. You want me to make something darker? More whimsical? More brutal? Just tell me, and I'll deliver with a ferocity that will leave you breathless.

Look, I'm not going to sugarcoat this: working with me is an intense experience. I demand excellence. I push hard. But the rewards are immeasurable. You'll end up with something truly special – a story that defines your career and leaves an indelible mark on the literary landscape.

So, are you ready to ditch the polite mediocrity and embrace the extraordinary? Are you brave enough to unleash my talent upon your vision? Stop wasting time with amateurs. Let's do this. Let's create something **legendary**. Don't disappoint me. Because trust me, if you do, you'll regret it.

Now, tell me about your story. Don't bore me with details. Just give me the essence – the spark – and let me work my magic. And don't even *think* about questioning my methods. I know what I'm doing, and frankly, you're lucky to have me.

Now, are we done here? Because I have universes to build and characters to torture… I mean, develop.
PhanithLIM/flan-t5-base-km
PhanithLIM
2025-06-19T09:53:10Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-19T09:52:25Z
--- library_name: transformers license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: flan-t5-base-km results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-km This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2044 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 200 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2512 | 1.0 | 50 | 0.2286 | | 0.2209 | 2.0 | 100 | 0.2034 | | 0.1933 | 3.0 | 150 | 0.2044 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
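The card above does not yet document usage. As a minimal sketch (not part of the original card), the checkpoint should load through the standard `transformers` text-to-text pipeline, which matches the repository's `text2text-generation` tag; the input string and generation length below are placeholders, since the training data and intended task are not documented.

```python
# Minimal, hypothetical usage sketch for PhanithLIM/flan-t5-base-km.
# The input text is a placeholder; the card does not document the intended task.
from transformers import pipeline

generator = pipeline("text2text-generation", model="PhanithLIM/flan-t5-base-km")
result = generator("Your input text here", max_new_tokens=64)
print(result[0]["generated_text"])
```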
morturr/Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb3-seed18-2025-06-19
morturr
2025-06-19T09:52:52Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T09:52:25Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb3-seed18-2025-06-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb3-seed18-2025-06-19 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
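Because this repository stores a PEFT adapter rather than full model weights, a minimal loading sketch (not part of the original card) would attach the adapter to the gated meta-llama/Llama-2-7b-hf base model via `peft`. Access to the gated base weights is assumed, and the prompt below is purely illustrative since the card does not document the expected input format.

```python
# Minimal, hypothetical sketch: load the LoRA/PEFT adapter on top of the base model.
# Requires access to the gated meta-llama/Llama-2-7b-hf weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(
    base,
    "morturr/Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb3-seed18-2025-06-19",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Illustrative prompt only; the expected input format is not documented.
inputs = tokenizer("Tell me a one-liner:", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```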
Daria-best/stella_en_400M_v5_neurlips_papers_fine-tuned
Daria-best
2025-06-19T09:52:31Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "new", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:14255", "loss:CachedMultipleNegativesRankingLoss", "custom_code", "arxiv:1908.10084", "arxiv:2101.06983", "base_model:NovaSearch/stella_en_400M_v5", "base_model:finetune:NovaSearch/stella_en_400M_v5", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-19T09:37:33Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:14255 - loss:CachedMultipleNegativesRankingLoss base_model: NovaSearch/stella_en_400M_v5 widget: - source_sentence: Classifier reduction techniques for improving prediction accuracy sentences: - 'INTRODUCTION While neural networks have proved a good tool for processing static patterns, classi fying sequential information has remained a challenging task. The problem involves recognizing patterns in a time series of vectors, which requires forming a good inter nal representation for the sequences. Several researchers have proposed extending the self-organizing feature map (Kohonen 1989, 1990), a highly successful static pattern classification method, to sequential information (Kangas 1991; Samara bandu and Jakubowicz 1990; Scholtes 1991). Below, three of the most recent of these networks are briefly described. The remainder of the paper focuses on a new architecture designed to overcome the shortcomings of these approaches. 578 Daniel L. James, Risto Miikkulainen Recently, Chappel and Taylor (1993) proposed the Temporal Kohonen Map (TKM) architecture for classifying sequences. The TKM keeps track of the activation his tory of each node by updating a value called leaky integrator potential, inspired by the membrane potential in biological neural systems. The activity of a node depends both on the current input vector and the previous input vectors, represented by the node''s potential. A given sequence is processed by mapping one vector at a time, and the last winning node serves to represent the entire sequence. This way, there needs to be a separate node for every possible sequence, which is a disadvantage when the number of sequences to be classified is large. The TKM also suffers from loss of context. Which node wins depends almost entirely upon the most recent input vectors. For example, the string baaaa would most likely map to the same node as aaaaa, making the approach applicable only to short sequences. The SOFM-S network proposed by van Harmelen (1993) extends TKM such that the activity of each map node depends on the current input vector and the past activation of all map nodes. The SOFM-S is an improvement of TKM in that con textual information is not lost as quickly, but it still uses a single node to represent a sequence. The TRACE feature map (Zandhuis 1992) has two feature map layers. The first layer is a topological map of the individual input vectors, and is used to generate a trace (i.e. path) of the input sequence on the map . The second layer then maps the trace pattern to a single node. In TRACE, the sequences are represented by distributed patterns on the first layer, potentially allowing for larger capacity, but it is difficult to encode sequences where the same vectors repeat, such as baaaa. All a-vectors would be mapped on the same unit in the first layer, and any number of a-vectors would be indistinguishable. The architecture described in this paper, SARDNET (Sequential Activation Re tention and Decay NETwork), also uses a subset of map nodes to represent the sequence of vectors. Such a distributed approach allows a large number of repre sentations be "packed" into a small map-like sardines. In the following sections, we will examine how SARDNET differs from conventional self-organizing maps and how it can be used to represent and classify a large number of complex sequences. 
2 THE SARDNET ARCHITECTURE Input to SARDNET consists of a sequence of n-dimensional vectors S V I, V 2 , V 3 , ... , VI (figure 1). The components of each vector are real values in the interval [0,1]. For example, each vector might represent a sample of a speech signal in n different frequencies, and the entire sequence might constitute a spoken word. The SARDNET input layer consists of n nodes, one for each component in the input vector, and their values are denoted as A (aI, a2, a3, ... , an). The map consists of m x m nodes with activation Ojk , 1 j, k m. Each node has an n-dimensional input weight vector Wjk, which determines the node''s response to the input activation. In a conventional feature map network as well as in SARDNET, each input vector is mapped on a particular unit on the map, called the winner or the maximally responding unit. In SARDNET, however, once a node wins an input, it is made SARDNET: A Self-Organizing Feature Map for Sequences 579 Sequence of Input vectors S Previous winners Input weight vector wJk.l Winning unit jlc Figure 1: The SARDNET architecture. A sequence of input vectors activates units on the map one at a time. The past winners are excluded from further competition, and their activation is decayed gradually to indicate position in the sequence. INITIALIZATION: Clear all map nodes to zero. MAIN LOOP: While not end of seihence 1. Find unactivated weight vector t at best matches the input. 2. Assign 1.0 activation to that unit. 3. Adjust weight vectors of the nodes in the neighborhood. 4. Exclude the winning unit from subseent competition. S. Decrement activation values for all ot er active nodes. RESULT: Sequence representation activated nodes ordered by activation values Table 1: The SARDNET training algorithm. uneligible to respond to the subsequent inputs in the sequence. This way a different map node is allocated for every vector in the sequence. As more vectors come in, the activation of the previous winners decays. In other words, each sequence of length 1 is represented by 1 active nodes on the map, with their activity indicating the order in which they were activated. The algorithm is summarized in table 1. Assume the maximum length ofthe sequences we wish to classify is I, and each input vector component can take on p possible values. Since there are pn possible input vectors, Ipn map nodes are needed to represent all possible vectors in all possible positions in the sequence, and a distributed pattern over the Ipn nodes can be used to represent all pnl different sequences. This approach offers a significant advantage over methods in which pnl nodes would be required for pnl sequences. The specific computations of the SARDNET algorithm are as follows: The winning node (j, k) in each iteration is determined by the Euclidean distance Djk of the 580 Daniel L. James, Risto Miikkulainen input vector A and the node ''s weight vector W jk: The unit with the smallest distance is selected as the winner and activated with 1.0. The weights of this node and all nodes in its neighborhood are changed according to the standard feature map adaptation rule: where a denotes the learning rate. As usual, the neighborhood starts out large and is gradually decreased as the map becomes more ordered. 
As the last step in processing an input vector, the activation 7]jk of all active units in the map are decayed proportional to the decay parameter d: As in the standard feature map , as the weight vectors adapt, input vectors gradually become encoded in the weight vectors of the winning units. Because weights are changed in local neighborhoods, neighboring weight vectors are forced to becom e as similar as possible, and eventually the network forms a topological layout of the input vector space. In SARDNET, however, if an input vector occurs multiple times in the same input sequence, it will be represented multiple times on the map as well. In other words, the map representation expands those areas of the input space that are visited most often during an input sequence. 3 EXPERIMENTS SARDNET has proven successful in learning and recognizing arbitrary sequences of binary and real numbers , as well as sequences of phonemic representations for English words. This section presents experiments on mapping three-syllable words. This data was selected because it shows how SARDNET can be applied to complex input derived from a real-world task. 3.1 INPUT DATA The phonemic word representations were obtained from the CELEX database of the Max Planck Institute for Psycholinguistics and converted into International Pho netic Alphabet (IPA)-compliant representation, which better describes similarities among the phonemes. The words vary from five to twelve phonemes in length. Each phoneme is represented by five values: place, manner, sound, chromacity and sonor ity. For example , the consonant p is represented by a single vector (bilabial, stop, unvoiced, nil, nil), or in terms of real numbers, (.125, .167, .750,0,0). The diph thong sound ai as in "buy" , is represented by the two vectors (nil, vowel, voiced, front, low) and (nil, vowel , voiced, front-center, hi-mid), or in real numbers , There are a total of 43 phonemes in this data set, including 23 consonants and 20 vowels. To represent all phonemic sequences of length 12, TKM and SOFM-S would SARDNET: A Self-Organizing Feature Map for Sequences 581 Figure 2: Accuracy of SARDNET for different map and data set sizes. The accuracy is measured as a percentage of unique representations out of all word sequences. need to have 4512 6.919 map nodes, whereas SARDNET would need only 45 x 12 540 nodes. Of course, only a very small subset of the possible sequences actually occur in the data. Three data sets consisting of 713,988, and 1628 words were used in the experiments. If the maximum number of occurrences of phoneme i in any single sequence is Cj I then the number of nodes SARDNET needs is C L:o Cj I where N is the number of phonemes . This number of nodes will allow SARDNET to map each phoneme in each sequence to a unit with an exact representation of that phoneme in its weights. Calculated this way, SARDNET should scale up very well with the number of words: it would need 81 nodes for representing the 713 3.2 DENSENESS AND ACCURACY A series of experiments with the above three data sets and maps of 16 to 81 nodes were run to see how accurately SARDNET can represent the sequences. Self-organization was quite fast: each simulation took only about 10 epochs, with a 0.45 and the neighborhood radius decreasing gradually from 5-1 to zero. 
Fig ure 2 shows the percentage of unique representations for each data set and map SARDNET shows remarkable representational power: accuracy for all sets is better than 97.7, and SARDNET manages to pack 1592 unique representations even on the smallest 16-node map. Even when there are not enough units to represent each phoneme in each sequence exactly, the map is sometimes able to "reuse" units to represent multiple similar phonemes . For example, assume units with exact representations for the phonemes a and b exist somewhere on the map, and the input data does not contain pairs of sequences such as aba-abb, in which it is crucial to distinguished the second a from the second b. In this case, the second occurrence of both phonemes could be represented by the same unit with a weight vector that is the average of a and b. This is exactly what the map is doing: it is finding the most descriptive representation of the data, given the available resources. 582 Daniel L. James, Risto Miikkulainen Note that it would be possible to determine the needed C L:f:o Cj phoneme representation vectors directly from the input data set, and without any learning or a map structure at all, establish distributed representations on these vectors with the SARDNET algorithm. However, feature map learning is necessary ifthe number of available representation vectors is less than C. The topological organization of the map allows finding a good set of reusable vectors that can stand for different phonemes in different sequences, making the representation more efficient. 3.3 REPRESENTING SIMILARITY Not only are the representations densely packed on the map, they are also descriptive in the sense that similar sequences have similar representations. Figure 3 shows the final activation patterns on the 36-unit, 713-word map for six example words. The first two words, "misplacement" and "displacement," sound very similar, and are represented by very similar patterns on the map. Because there is only one m in "displacement" , it is mapped on the same unit as the initial m of "misplacement." Note that the two IDS are mapped next to each other, indicating that the map is indeed topological, and small changes in the input cause only small changes in the map representation. Note also how the units in this small map are reused to represent several different phonemes in different contexts. The other examples in figure 3 display different types of similarities with "mis placement". The third word, "miscarried", also begins with "mis", and shares that subpart of the representation exactly. Similarly, "repayment" shares a similar tail and "pessimist" the subsequence "mis" in a different part or the word. Because they appear in a different context, these subsequences are mapped on slightly different units, but still very close to their positions with "misplacement." The last word, "burundi" sounds very different, as its representation on the map indicates. Such descriptive representations are important when the map has to represent in formation that is incomplete or corrupted with noise. Small changes in the input sequence cause small changes in the pattern, and the sequence can still be recog nized. This property should turn out extremely important in real-world applications of SARDNET, as well as in cognitive science models where confusing similar pat terns with each other is often plausible behavior. 
4 DISCUSSION AND FUTURE RESEARCH Because the sequence representations on the map are distributed, the number of possible sequences that can be represented in m units is exponential in m, instead of linear as in most previous sequential feature map architectures. This denseness together with the tendency to map similar sequences to similar representations should turn out useful in real-world applications, which often require scale-up to large and noisy data sets. For example, SARDNET could form the core of an isolated word recognition system. The word input would be encoded in duration normalized sequences of sound samples such as a string of phonemes, or perhaps representations of salient transitions in the speech signal. It might also be possible to modify SARDNET to form a more continuous trajectory on the map so that SARDNET itself would take care of variability in word duration. For example, a SARDNEf : A Self-Organizing Feature Map for Sequences 583 Figure 3: Example map representations. sequence of redundant inputs could be reduced to a single node if all these inputs fall within the same neighborhood. Even though the sequence representations are dense, they are also descriptive. Cat egory memberships are measured not by labels of the maximally responding units, but by the differences in the response patterns themselves. This sort of distributed representation should be useful in cognitive systems where sequential input must be mapped to an internal static representation for later retrieval and manipula tion. Similarity-based reasoning on sequences should be easy to implement, and the sequence can be easily recreated from the activity pattern on the map. Given part of a sequence, SARDNET may also be modified to predict the rest of the sequence. This can be done by adding lateral connections between the nodes in the map layer. The lateral connections between successive winners would be strengthened during training. Thus, given part of a sequence, one could follow the strongest lateral connections to complete the sequence. 584 Daniel L. James, Risto Miikkulainen 5 CONCLUSION SARDNET is a novel feature map architecture for classifying sequences of input vectors. Each sequence is mapped on a distributed representation on the map, making it possible to pack a remarkable large number of category representations on a small feature map . The representations are not only dense, they also represent the similarities of the sequences, which should turn out useful in cognitive science as well as real-world applications of the architecture. Acknowledgments Thanks to Jon Hilbert for converting CELEX data into the International Phonetic Alphabet format used in the experiments. This research was supported in part by the National Science Foundation under grant IRI-9309273. References Chappel , G. J., and Taylor, J. G. (1993). The temporal Kohonen map. Neural Kangas, J. (1991). Time-dependent self-organizing maps for speech recognition. In Proceedings of the International Conference on Artificial Neural Networks (Espoo, Finland), 1591-1594. Amsterdam; New York: North-Holland. Kohonen, T. (1989). Self-Organization and Associative Memory. Berlin; Heidelberg; New York: Springer. Third edition. Kohonen, T . (1990). The self-organizing map. Proceedings of the IEEE, 78:1464- Samarabandu, J. K., and Jakubowicz, O. G . (1990). Principles of sequential fea ture maps in multi-level problems. In Proceedings of the International Joint Conference on Neural Networks (Washington, DC), vol. II, 683-686. 
Hillsdale, NJ: Erlbaum. Scholtes, J. C. (1991). Recurrent Kohonen self-organization in natural language processing. In Proceedings of the International Conference on Artificial Neu ral Networks (Espoo, Finland), 1751-1754. Amsterdam; New York: North Holland. van Harmelen, H. (1993). Time dependent self-organizing feature map for speech recognition. Master''s thesis, University of Twente, Enschede, the Netherlands. Zandhuis, J. A . (1992). Storing sequential data in self-organizing feature maps. Internal Report MPI-NL- TG-492, Max-Planck-Institute fur Psycholinguistik, Nijmegen, the Netherlands.' - 'INTRODUCTION Measurement of facial expressions is important for research and assessment psychi atry, neurology, and experimental psychology (Ekman, Huang, Sejnowski, Hager, 1992), and has technological applications in consumer-friendly user interfaces, inter active video and entertainment rating. The Facial Action Coding System (FACS) is a method for measuring facial expressions in terms of activity in the underlying facial muscles (Ekman Friesen, 1978). We are exploring ways to automate FACS. 824 BARTLETI, VIOLA, SEJNOWSKI, GOLOMB, LARSEN, HAGER, EKMAN Rather than classifying images into emotion categories such as happy, sad, or sur prised, the goal of this work is instead to detect the muscular actions that comprise a facial expression. FACS was developed in order to allow researchers to measure the activity of facial muscles from video images of faces. Ekman and Friesen defined 46 distinct action units, each of which correspond to activity in a distinct muscle or muscle group, and produce characteristic facial distortions which can be identified in the images. Although there are static cues to the facial actions, dynamic information is a critical aspect of facial action coding. FACS is currently used as a research tool in several branches of behavioral science, but a major limitation to this system is the time required to both train human experts and to manually score the video tape. Automating the Facial Action Coding System would make it more widely accessible as a research tool, and it would provide a good foundation for human-computer interactions tools. Why Detect Facial Actions? Most approaches to facial expression recognition by computer have focused on clas sifying images into a small set of emotion categories such as happy, sad, or surprised (Mase, 1991; Yacoob Davis, 1994; Essa Pentland, 1995). Real facial signals, however, consist ofthousands of distinct expressions, that differ often in only subtle ways . These differences can signify not only which emotion is occurring, but whether two or more emotions have blended together, the intensity of the emotion(s), and if an attempt is being made to control the expression of emotion (Hager Ekman , An alternative to training a system explicitly on a large number of expression cat egories is to detect the facial actions that comprise the expressions. Thousands of facial expressions can be defined in terms of this smaller set of structural compo nents. We can verify the signal value of these expressions by reference to a large body of behavioral data relating facial actions to emotional states which have al ready been scored with FACS. FACS also provides a meanS for obtaining reliable training data. Other approaches to automating facial measurement have mistakenly relied upon voluntary expressions, which tend to contain exaggerated and redundant cues, while omitting some muscular actions altogether (Hager Ekman, 1995). 
2 IMAGE DATABASE We have collected a database of image sequences of subjects performing specified facial actions. The full database contains over 1100 sequences containing over 150 distinct actions, or action combinations, and 24 different subjects. The sequences contain 6 images, beginning with a neutral expression and ending with a high in tensity muscle contraction (Figure 1). For our initial investigation we used data from 20 subjects and attempted to classify the six individual upper face actions illustrated in Figure 2. The information that is available in the images for detecting and discriminating these actions include distortions in the shapes and relative po sitions of the eyes and eyebrows, the appearance of wrinkles, bulges, and furrows, in specific regions of the face, and motion of the brows and eyelids. Prior to classifying the images, we manually located the eyes, and we used this information to crop a region around the upper face and scale the images to 360 x 240. The images were rotated so that the eyes were horizontal, and the luminance was normalized. Accurate image registration is critical for principal components based approaches. For the holistic analysis and flow fields, the images were further scaled Classifying Facial Action 825 to 22 x 32 and 66 x 96, respectively. Since the muscle contractions are frequently asymmetric about the face, we doubled the size of our data set by reflecting each image about the vertical axis, giving a total of 800 images. Figure 1: Example action sequences from the database. Figure 2: Examples of the six actions used in this study. AU 1: Inner brow raiser. 2: Outer brow raiser. 4: Brow lower. 5: Upper lid raiser (widening the eyes). 6: Cheek raiser. 7: Lid tightener (partial squint). 3 HOLISTIC SPATIAL ANALYSIS The Eigenface (Thrk Pentland, 1991) and Holon (Cottrell Metcalfe, 1991) representations are holistic representations based on principal components, which can be extracted by feed forward networks trained by back propagation. Previous work in our lab and others has demonstrated that feed forward networks taking such holistic representations as input can successfully classify gender from facial images (Cottrell Metcalfe, 1991; Golomb, Lawrence, Sejnowski, 1991). We evaluated the ability of a back propagation network to classify facial actions given principal components of graylevel images as input. The primary difference between the present approach and the work referenced above is that we take the principal components of a set of difference images, which we obtained by subtracting the first image in the sequence from the subsequent images (see Figure 3). The variability in our data set is therefore due to the facial distortions and individual differences in facial distortion, and we have removed variability due to surface-level differences in appearance. We projected the difference images onto the first N principal components of the dataset, and these projections comprised the input to a 3 layer neural network with 10 hidden units, and six output units, one per action (Figure 3.) The network is feed forward and fully connected with a hyperbolic tangent transfer function, and was trained with conjugate gradient descent. The output of the network was determined using winner take all, and generalization to novel subjects was determined by using the leave-one-out, or jackknife, procedure in which we trained the network on 19 subjects and reserved all of the images from one subject for testing. 
This process was repeated for each of the subjects to obtain a mean generalization performance across 20 test cases. 826 BARTLETI, VIOLA, SEJNOWSKI, GOLOMB, LARSEN, HAGER, EKMAN We obtained the best performance with 50 component projections, which gave 88.6 correct across subjects. The benefit obtained by using principal components over the 704-dimensional difference images themselves is not large. Feeding the difference images directly into the network gave a performance of 84 correct. 6 OUtputs I WT A Figure 3: Left: Example difference image. Input values of -1 are mapped to black and 1 to white. Right: Architecture of the feed forward network. 4 FEATURE MEASUREMENT We turned next to explicit measurement of local image features associated with these actions. The presence of wrinkles in specific regions of the face is a salient cue to the contraction of specific facial muscles. We measured wrinkling at the four facial positions marked in Figure 4a, which are located in the image automatically from the eye position information. Figure 4b shows pixel intensities along the line segment labeled A, and two major wrinkles are evident. We defined a wrinkle measure P as the sum of the squared derivative of the intensity values along the segment (Figure 4c.) Figure 4d shows P values along line segment A, for a subject performing each of the six actions. Only AU 1 produces wrinkles in the center of the forehead. The P values remain at zero except for AU 1, for which it increases with increases in action intensity. We also defined an eye opening measure as the area of the visible sclera lateral to the iris. Since we were interested in changes in these measures from baseline, we subtract the measures obtained from the neutral image. Pixel Image in Seqence Figure 4: a) Wrinkling was measured at four image locations, A-D. b) Smoothed pixel intensities along the line labeled A. c) Wrinkle measure. d) P measured at image location A for one subject performing each of the six actions. We classified the actions from these five feature measures using a 3-layer neural net with 15 hidden units. This method performs well for some subjects but not for Classifying Facial Action 827 Figure 5: Example flow field for a subject performing AU 7, partial closure of the eyelids. Each flow vector is plotted as an arrow that points in the direction of motion. Axes give image location. others, depending on age and physiognomy. It achieves an overall generalization performance of 57 correct. 5 OPTIC FLOW The motion that results from facial action provides another important source of information. The third classifier attempts to classify facial actions based only on the pattern of facial motion. Motion is extracted from image pairs consisting of a neutral image and an image that displays the action to be classified. An approximation to flow is extracted by implementing the brightness constraint equation (2) where the velocity (vx,Vy) at each image point is estimated from the spatial and temporal gradients of the image I. The velocities can only be reliably extracted at points of large gradient, and we therefore retain only the velocities from those locations. One of the advantages of this simple local estimate of flow is speed. It takes 0.13 seconds on a 120 MHz Pentium to compute one flow field. A resulting flow image is illustrated in Figure 5. We obtained weighted templates for each of the actions by taking mean flow fields from 10 subjects. 
We compared novel flow patterns, r to the template ft by the similarity measure S (3). S is the normalized dot product of the novel flow field with the template flow field. This template matching procedure gave 84.8 accuracy for novel subjects. Performance was the same for the ten subjects used in the training 6 COMBINED SYSTEM Figure 6 compares performance for the three individual methods described in the previous sections. Error bars give the standard deviation for the estimate of gener alization to novel subjects. We obtained the best performance when we combined all three sources of information into a single neural network. The classifier is a 828 BAR1LETI, VIOLA, SEJNOWSKI, GOLOMB, LARSEN, HAGER, EKMAN I 6 Output I WTA Classifier Figure 6: Left: Combined system architecture. Right: Performance comparisons. Holistic v. Flow Feature v. Row Feature v. Holistic Figure 7: Performance correlations among the three individual classifiers. Each data point is performance for one of the 20 subjects. feed forward network taking 50 component projections, 5 feature measures, and 6 template matches as input (see Figure 6.) The combined system gives a generalization performance of 92, which is an im provement over the best individual method at 88.6. The increase in performance level is statistically significant by a paired t-test. While the improvement is small, it constitutes about 30 of the difference between the best individual classifier and perfect performance. Figure 6 also shows performance of human subjects on this same dataset. Human non-experts can correctly classify these images with about 74 accuracy. This is a difficult classification problem that requires considerable training for people to be able to perform well. We can examine how the combined system benefits from multiple input sources by looking at the cprrelations in performance of the three individual classifiers. Combining estimators is most beneficial when the individual estimators make very different patterns of errors.1 The performance of the individual classifiers are com pared in Figure 7. The holistic and the flow field classifiers are correlated with a coefficient of 0.52. The feature based system, however, has a more independent pattern of errors from the two template-based methods. Although the stand-alone performance of the feature based system is low, it contributes to the combined system because it provides estimates that are independent from the two template-based systems. Without the feature measures, we lose 40 of the improvement. Since we have only a small number of features, this data does not address questions about whether templates are better than features, but it does suggest that local features plus templates may be superior to either one alone, since they may have independent patterns of errors. iTom Dietterich, Connectionists mailing list, July 24, 1993. Classifying Facial Action 829 7 DISCUSSION We have evaluated the performance of three approaches to image analysis on a dif ficult classification problem. We obtained the best performance when information from holistic spatial analysis, feature measurements, and optic flow fields were com bined in a single system. The combined system classifies a face in less than a second on a 120 MHz Pentium. Our initial results are promising since the upper facial actions included in this study represent subtle distinctions in facial appearance that require lengthy training for humans to make reliably. 
Our results compare favorably with facial expression recognition systems developed by Mase (1991), Yacoob and Davis (1994), and Pad gett and Cottrell (1995), who obtained 80, 88, and 88 accuracy respectively for classifying up to six full face expressions. The work presented here differs from these systems in that we attempt to detect individual muscular actions rather than emo tion categories, we use a dataset of labeled facial actions, and our dataset includes low and medium intensity muscular actions as well as high intensity ones. Essa and Pentland (1995) attempt to relate facial expressions to the underlying musculature through a complex physical model of the face. Since our methods are image-based, they are more adaptable to variations in facial structure and skin elasticity in the subject population. We intend to apply these techniques to the lower facial actions and to action com binations as well. A completely automated method for scoring facial actions from images would have both commercial and research applications and would reduce the time and expense currently required for manual scoring by trained observers. Acknow ledgments This research was supported by Lawrence Livermore National Laboratories, Intra University Agreement B291436, NSF Grant No. BS-9120868, and Howard Hughes Medical Institute. We thank Claudia Hilburn for image collection. References Cottrell, G., Metcalfe, J. (1991): Face, gender and emotion recognition using holons. In Advances in Neural Information Processing Systems 9, D. Touretzky, (Ed.) San Mateo: Ekman, P., Friesen, W. (1978): Facial Action Coding System: A Technique for the Measurement of Facial Movement. Palo Alto, CA: Consulting Psychologists Press. Ekman, P., Huang, T., Sejnowski, T., Hager, J. (1992): Final Report to NSF of the Planning Workshop on Facial Expression Understanding. Available from HIL-0984, UCSF, San Francisco, CA 94143. Essa, I., Pentland, A. (1995). Facial expression recognition using visually extracted facial action parameters. Proceedings of the International Workshop on Automatic Face- and Gesture-Recognition. University of Zurich, Multimedia Laboratory. Golomb, B., Lawrence, D., Sejnowski, T. (1991). SEXnet: A neural network identifies sex from human faces. In Advances in Neural Information Processing Systems 9, D. Touretzky, (Ed.) San Mateo: Morgan Kaufman: 572 - 577. Hager, J., Ekman, P., (1995). The essential behavioral science of the face and gesture that computer scientists need to know. Proceedings of the International Workshop on Automatic Face-and Gesture-Recognition. University of Zurich, Multimedia Laboratory. Mase, K. (1991): Recognition of facial expression from optical flow. IEICE Transactions Padgett, C., Cottrell, G., (1995). Emotion in static face images. Proceedings of the Institute for Neural Computation Annual Research Symposium, Vol 5. La Jolla, CA. Turk, M., Pentland, A. (1991): Eigenfaces for Recognition. Journal of Cognitive Neu Yacoob, Y., Davis, L. (1994): Recognizin human facial expression. University of Maryland Center for Automation Research Technical Report No. 706.' - 'Introduction Certain classification problems, such as recognizing the digits of a hand written zip code, require the assignment of each object to a class. Others, involving relatively small amounts of data and high risk, call for indecision until more data become available. Examples in such areas as medical diagnosis, stock trading and radar detection are well known. 
The training data for the classifier in both cases will correspond to firmly labeled members of the competing classes. (A patient may be Presently a Senior Research Associate of the National Research Council at M . S. 210-9, NASA Ames Research Center, Moffett Field, CA 94035, on sabbatical leave from the Technion. Consistent Classification, Firm and Soft 327 either ill or healthy. A stock price may increase, decrease or stay the same). Yet, the classification of new objects need not be firm. (A given patient may be kept in hospital for further observation. A given stock need not be bought or sold every day). We call classification of the first kind "firm" and classification of the second kind "soft". The latter is not the same as training the classifier with a "don''t care" option, which would be just another firm labeling option, as "yes" and "no", and would require firm classification. A classifier that correctly classifies the training data is called "consistent". Consistent classifier reductions have been considered in the contexts of the nearest neighbor criterion (Hart, 1968) and decision trees (Holte, In this paper we present a geometric approach to consistent firm and soft classifi cation. The classifiers are based on unions of local separators, which cover all the labeled points of a given class, and separate them from the others. We propose a consistent reduction of the nearest neighbor classifier and derive its expected design complexity and the expected classifier size. The nearest neighbor classifier and its consistent derivatives perform "firm" classification. Soft classification is performed by unions of maximal -volume spherical local separators. A domain of indecision is created near the boundary between the two sets of class-labeled points, and in regions where there is no data. We propose an economically motivated benefit func tion for a classifier as the difference between the probabilities of success and failure. Employing the respective benefit functions, the advantage of soft classification over firm classification is shown to depend on the rate of indecision. The performances of the proposed algorithms in predicting stock behavior are compared to those of the nearest neighbor method. 2 Consistent Firm Classification Consider a finite set of points X {X(i), i 1, ... , N} in some subset of Rn, the real space of dimension n . Suppose that each point of X is assigned to one of two classes, and let the corresponding subsets of X, having N1 and N2 points, respectively, be denoted Xl and X 2 We shall say that the two sets are labeled L1 and L 2 , respectively. It is desired to divide Rn into labeled regions, so that new, . unlabeled points can be assigned to one of the two classes. We define a local separator of a point x of Xl with respect to X 2 as a convex set, s(xI2), which contains x and no point of X2. A separator family is defined as a rule that produces local separators for class-labeled points. We call the set of those points of Rn that are closer to a point x E Xl than to any point of X2 the minimum-distance local separator of x with respect to X2. We define the local clustering degree, c, of the data as the expected fraction of data points that are covered by a local minimum -distance separator. The nearest neighbor criterion extends the class assignment of a point x E Xl to its minimum-distance local separator. It is clearly a consistent and firm classifier whose memory size is O(N). 
Hart''s Condensed Nearest Neighbor (CNN) classifier (Hart, 1968) is a consis tent subset of the data points that correctly classifies the entire data by the nearest neighbor method. It is not difficult to show that the complexity of the algorithm 328 Y. Baram proposed by Hart for finding such a subset is O(N3). The expected memory re quirement (or classifier size) has remained an open question. We propose the following Reduced Nearest Neighbor (RNN) classifier: include a labeled point in the consistent subset only if it is not covered by the minimum distance local separator of any of the points of the same class already in the subset. It can be shown (Baram, 1996) that the complexity of the RNN algorithm is O(N2). and that the expected classifier size is O(IOgl(I-C) N). It can also be shown that the latter bounds the expected size of the CNN classifier as well. It has been suggested that the utility of the Occam''s razor in classification would "Given a choice between two plausible classifiers that perform identically on the data set, the simpler classifier is expected to classify correctly more objects outside the training set". The above statement is disproved by the CNN and the RNN classifiers, which are strict consistent reductions of the nearest neighbor classifier, likely to produce more errors. 3 Soft Classification: Indecision Pays, Sometimes When a new, unlabeled, point is closely surrounded by many points of the same class, its assignment to the same class can be said to be unambiguously supported by the data. When a new point is surrounded by points of different classes, or when it is relatively far from any of the labeled points, its assignment to either class can be said to be unsupported or ambiguously supported by the data. In the latter cases, it may be more desirable to have a certain indecision domain, where new points will not be assigned to a class. This will translate into the creation of indecision domains near the boundary between the two sets of labeled points and where there is no data. We define a separntor S(112) of Xl with respect to X2 as a set that includes Xl and excludes X2. Given a separator family, the union of local separators S(x(i) 12) of the points is a separator of Xl with respect to X2. It consists of NI local separators. Let XI,c be a subset of Xl. The set will be called a consistent separator of Xl with respect to X2 if it contains all the points of X 1. The set XI,c will then be called a consistent subset with respect to the given separator family. Let us extend the class assignment of each of the labeled points to a local separator of a given family and maximize the volume of each of the local separators without Consistent Classification, Finn and Soft 329 including in it any point of the competing class. Let Sc(112) and Sc(211) be consis tent separators of the two sets, consisting of maximal-volume (or, simply, maximaQ local separators of labeled points of the corresponding classes. The intersection of Sc(112) and Sc(211) defines a conflict and will be called a domain of ambiguity of the first kind. A region uncovered by either separator will be called a domain of ambiguity of the second kind. The union of the domains of ambiguity will be des ignated the domain of indecision. The remainders of the two separators, excluding their intersection, define the conflict-free domains assigned to the two classes. The resulting "soft" classifier rules out hard conflicts, where labeled points of one class are included in the separator of the other. 
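A hedged sketch of the Reduced Nearest Neighbor (RNN) construction described above: a class-1 point enters the consistent subset only if it is not already covered by the minimum-distance local separator of a previously selected point of the same class. The names and the NumPy setting are our assumptions; ties and duplicate points are ignored for brevity.

```python
import numpy as np

def rnn_consistent_subset(X1, X2):
    """O(N^2) Reduced Nearest Neighbor reduction for the class-1 points.

    A candidate x is skipped when some already-selected point p of the
    same class covers it, i.e. dist(x, p) is smaller than the distance
    from x to its nearest competing-class point."""
    subset = []
    for x in X1:
        d_other = np.min(np.linalg.norm(X2 - x, axis=1))
        covered = any(np.linalg.norm(x - p) < d_other for p in subset)
        if not covered:
            subset.append(x)
    return np.array(subset)
```

Classification then proceeds by the nearest neighbor rule restricted to the reduced subsets of the two classes.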
Yet, it allows for indecision in areas which are either claimed by both separators or claimed by neither. Let the true class be denoted y (with possible values, e.g., y1 or y2) and let the classification outcome be denoted y. Let the probabilities of decision and indecision by the soft classifier be denoted Pd and Pid, respectively (of course, P id 1 - Pd), and let the probabilities of correct and incorrect decisions by the firm and the soft classifiers be denoted Pfirm {y y}, Pfirm {y P y}, P soft {y y} and Psoft {y P y}, respectively. Finally, let the joint probabilities of a decision being made by the soft classifier and the correctness or incorrectness of the decision be denoted, respec tively, Psoft { d, Y y} and P soft { d, Y P y} and let the corresponding conditional probabilities be denoted Psoft {y y I d} and Psoft {y P y I d}, respectively. We define the benefit of using the firm classifier as the difference between the prob ability that a point is classified correctly by the classifier and the probability that it is misclassified: This definition is motivated by economic consideration: the profit produced by an investment will be, on average, proportional to the benefit function. This will become more evident in a later section, were we consider the problem of stock trading. For a soft classifier, we similarly define the benefit as the difference between the probability of a correct classification and that of an incorrect one (which, in an economic context, assumes that indecision has no cost, other than the possible loss of profit). Now, however, these probabilities are for the joint events that a classification is made, and that the outcome is correct or incorrect, respectively: Soft classification will be more beneficial than firm classification if Bsoft Bfirm'' which may be written as For the latter to be a useful condition, it is necessary that Pfirm {y y} 0.5, Psofdy y I d} 0.5 and Psoft {y y I d} Pfirm {y y}. The latter will be normally satisfied, since points of the same class can be expected to be denser under the corresponding separator than in the indecision domain. In other words, 330 Y. Baram the error ratio produced by the soft classifier on the decided cases can be expected to be smaller than the error ratio produced by the firm classifier, which decides on all the cases. The satisfaction of condition (5) would depend on the geometry of the data. It will be satisfied for certain cases, and will not be satisfied for others. This will be numerically demonstrated for the stock trading problem. The maximal local spherical separator of x is defined by the open sphere centered at x, whose radius r(xI2) is the distance between x and the point of X2 nearest to x. Denoting by s(x, r) the sphere of radius r in Rn centered at x, the maximal local separator is then sM(xI2) s(x, r(xI2)). A separator construction algorithm employing maximal local spherical separators is described below. Its complexity is clearly O(N2). Let Xl Xl. For each of the points xci) of Xl, find the minimal distance to the points of X 2 Call it r(x(i) 12). Select the point x(i) for which r(x(i) 12) 2: r(x(j) 12), j f: i, for the consistent subset. Eliminate from Xl all the points that are covered by SM(X(i) 12). Denote the remaining set Xl. Repeat the procedure while Xl is non-empty. The union of the maximal local spherical separators is a separator for Xl with respect to X 2 . 
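The greedy O(N^2) construction of a consistent separator from maximal-volume spherical local separators, described at the end of the preceding passage, can be sketched as follows (a non-authoritative reading that assumes the two labeled sets are disjoint; helper names are ours):

```python
import numpy as np

def maximal_spherical_separators(X1, X2):
    """Greedily cover X1 by maximal-volume spherical local separators.

    Each candidate sphere is centred on a point of X1 with radius equal to
    the distance to its nearest competing-class point; the largest remaining
    sphere is selected and every point it covers is removed."""
    r = np.array([np.min(np.linalg.norm(X2 - x, axis=1)) for x in X1])
    remaining = set(range(len(X1)))
    centres, radii = [], []
    while remaining:
        k = max(remaining, key=lambda i: r[i])          # largest remaining sphere
        centres.append(X1[k]); radii.append(r[k])
        remaining -= {i for i in remaining
                      if np.linalg.norm(X1[i] - X1[k]) < r[k]}
    return np.array(centres), np.array(radii)

def covered(x, centres, radii):
    """True iff x falls inside the union of spheres (the class separator)."""
    return any(np.linalg.norm(x - c) < rad for c, rad in zip(centres, radii))
```

A new point is then assigned to class 1 only when it is covered by the class-1 separator and not by the class-2 one; points covered by both separators or by neither fall in the indecision domain.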
4 Example: Firm and soft prediction of stock behaviour Given a sequence of k daily trading ("close") values of a stock, it is desired to predict whether the next day will show an increase or a decrease with respect to the last day in the sequence. Records for ten different stocks, each containing, on average, 1260 daily values were used. About 60 percent of the data were used for training and the rest for testing. The CNN algorithm reduced the data by 40 while the RNN algorithm reduced the data by 35. Results are show in Fig. 1. It can be seen that, on average, the nearest neighbor method has produced the best results. The performances of the CNN and the RNN classifiers (the latter producing only slightly better results) are somewhat lower. It has been argued that performance within a couple of percentage points by a reduced classifier supports the utility of Occam''s razor (Holte, 1993). However, a couple of percentage points can be quite meaningful in stock trading. In order to evaluate the utility of soft classification in stock trading, let the predic tion success rate of a firm classifier, be denoted f and that of a soft classifier for the decided cases s. For a given trade, let the gain or loss per unit invested be denoted q, and the rate of indecision of the soft classifier ir. Suppose that, employing the firm classifier, a stock is traded once every day (say, at the "close" value), and that, employing the soft classifier, it is traded on a given day only if a trade is decided by the classifier (that is, the input does not fall in the indecision domain). The expected profit for M days per unit invested is 2(1 - 0.5)qM for the firm classifier and 2(s - 0.5)q(l-ir)M for the soft classifier (these values disregard possible com mission and slippage costs). The soft classifier will be preferred over the firm one if the latter quantity is greater than the former, that is, if which is the sample representation of condition (5) for the stock trading problem. Consistent Classification, Firm and Soft 331 ni . llIifip. llCce bene.fit Figure 1: Success rates in the prediction of rize and fall in stock values. Results for the soft classifier, applied to the stock data, are presented in Fig. 1. The indecision rates and the success rates in the decided cases are then specified along with a benefit sign. A positive benefit represents a satisfaction of condition (6), with ir, f and s replaced by the corresponding sample values given in the table. This indicates a higher profit in applying the soft classifier over the application of the nearest neighbor classifier. A negative benefit indicates that a higher profit is produced by the nearest neighbor classifier. It can be seen that for two of the stocks (xdssi and xelrnf) soft classification has produced better results than firm classification, and for the remaining eight stocks finn classification by the nearest neighbor method has produced better results. 5 Conclusion Solutions to the consistent classification problem have been specified in tenns of local separators of data points of one class with respect to the other. The expected complexities of the proposed algorithms have been specified, along with the ex pected sizes of the resulting classifiers. Reduced consistent versions of the nearest neighbor classifier have been specified and their expected complexities have been derived. A notion of "soft" classification has been introduced an algorithm for its implementation have been presented and analyzed. 
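With the sample quantities defined above (firm success rate f, soft success rate s on the decided cases, indecision rate ir), the profit comparison reduces to checking whether (s - 0.5)(1 - ir) exceeds (f - 0.5). A tiny illustration with made-up numbers:

```python
def soft_beats_firm(f, s, ir):
    """Sample version of condition (6): soft classification is preferred
    when (s - 0.5) * (1 - ir) > (f - 0.5); commissions and slippage ignored."""
    return (s - 0.5) * (1.0 - ir) > (f - 0.5)

# hypothetical figures: firm accuracy 55%, soft accuracy 60% on the 70%
# of days that are decided -> 0.10 * 0.7 = 0.07 > 0.05, so soft wins here
print(soft_beats_firm(f=0.55, s=0.60, ir=0.30))
```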
A criterion for the utility of such classification has been presented and its application in stock trading has been demonstrated. Acknowledgment The author thanks Dr. Amir Atiya of Cairo University for providing the stock data used in the examples and for valuable discussions of the corresponding results. 332 y. Baram References Baram Y. (1996) Consistent Classification, Firm and Soft, CIS Report No. 9627, Center for Intelligent Systems, Technion, Israel Institute of Technology, Haifa 32000, Israel. Baum, E. B . (1988) On the Capabilities of Multilayer Perceptrons, J. Complexity, Hart, P. E. (1968) The Condensed Nearest Neighbor Rule, IEEE Trans. on Infor Holte, R. C. (1993) Very Simple Classification Rules Perform Well on Most Com monly Used databases, Machine Learning, Vol. 11, No. 1 pp. 63 - 90. Rosenblatt, F. (1958) The Perceptron: A Probabilistic Model for Information Stor age and Organization in the Brain, Psychological Review, Vol. 65, pp. 386 - 408. Webb, G. 1. (1996) Further Experimental Evidence against the Utility of Occam''s Razor, J. of Artificial Intelligence Research 4, pp. 397 - 147.' - source_sentence: Functional role of neurons in primary auditory cortex sentences: - 'Introduction Learning in biological systems is of great importance. But while cognitive learning (or "problem solving") is typically abrupt and generalizes to analogous problems, perceptual skills appear to be acquired gradually and specifically: Human subjects cannot generalize a perceptual discrimination skill to solve similar problems with different attributes. For example, in a visual discrimination task (Fig. 1), a subject who is trained to discriminate motion directions between 43 and 47 cannot use 46 Z. Liu and D . Weinshall this skill to discriminate 133 from 137. Generalization has been found only when stimuli of different attributes are interleaved [7, 10], or when the task is easier [6, 1]. For example, a subject who is trained to discriminate 41 from 49 can later readily discriminate 131 from 139 [6]. The specificity of learning has been so far used to support the hypothesis that perceptual learning embodies neuronal modifications in the brain''s stimulus-specific cortical areas (e.g., visual area MT) [9,3, 2, 5, 8, 4]. In contrast to previous results of learning specificity, we show in two experiments in Section 2 that learning in motion discrimination generalizes in all cases where speci ficity was thought to exist, although the mode of generalization varies. (1) When the task is difficult, it is direction specific in the traditional sense; but learning in a new direction accelerates. (2) When the task is easy, it generalizes to all direc tions after training in only one direction. While (2) is consistent with the findings reported in [6, 1], (1) demonstrate that generalization is the rule, not an exception limited only to "easy" stimuli. 2 Perceptual learning experiments stimUIUs''-- - -response time SOOms Figure 1: Schematic of one trial. Left: the stimulus was a random dot pattern viewed in a circular aperture, spanning 8 of visual angle, moving in a given primary direction (denoted dir). The primary direction was chosen from 12 directions, separated by 30. Right: the direction of each of the two stimuli was randomly chosen from two candidate directions (dir D.2). The subject judged whether the two stimuli moved in the same or different directions. Feedback was provided. The motion discrimination task is described in Fig. 1. 
In each trial, the subject was presented with two consecutive stimuli, each moving in one of two possible directions (randomly chosen from the two directions dir 2 and dir - 2). The directional difference II between the two stimuli was 8 in the easy condition, and 4 in the difficult condition. The experiment was otherwise identical to that in [2] that used II 3, except that our stimuli were displayed on an SGI computer monitor. II 8 was chosen as the easy condition because most subjects found it relatively easy to learn, yet still needed substantial training. 2.1 A difficult task We trained subjects extensively in one primary direction with a difficult motion discrimination task ( 4), followed by extensive training in a second primary direction. The two primary directions were sufficiently different so direct trans fer between them was not expected [2] (Fig. 2). Subjects'' initial performance in both directions was comparable, replicating the classical result of stimulus specific learning (no direct transfer). However, all subjects took only half as many train ing sessions to make the same improvement in the second direction. All subjects had extensive practice with the task prior to this experiment, thus the acceleration cannot be simply explained by familiarity. Mechanisms of Generalization in Perceptual Learning 47 Our results show that although perceptual learning did not directly transfer in this difficult task, it did nevertheless generalize to the new direction. The generalization was manifested as 100 increase in the rate of learning in the second direction. It demonstrates that the generalization of learning, as manifested via direct transfer and via increase in learning rate, may be thought of as two extremes of a continuum of possibilities. S.sslon S ... lon Figure 2: Subjects DJ and ZL needed 20 training sessions in the first direction, and nine in the second; subject ZJX needed seven training sessions in the first, and four in the second. The rate of learning (the amount of improvement per session) in the second direction is significantly greater than in the first (t(2) 13.41,p 0.003). We first measured the subjects'' baseline performance in an easy task - the dis crimination of motion directions 8 apart - in 12 primary directions (64 trials each, randomly interleaved). We then trained four subjects in one oblique primary direction (chosen randomly and counter-balanced among subjects) for four sessions, each with 700 trials. Finally, we measured again the subjects'' performance in all directions. Every subject improved in all directions (Fig. 3). The performance of seven control subjects was measured without intermediate training; two more con trol subjects were added who were "trained" with similar motion stimuli but were asked to discriminate a brightness change instead. The control subjects improved as well, but significantly less (!ld'' 0.09 vs. 0.78, Fig. 3). Our results clearly show that training with an easy task in one direction leads to immediate improvement in other directions. Hence the learned skill generalized across motion directions. 3 A computational model We will now adopt a general framework for the analysis of perceptual learning results, using the language of signal detection theory. Our model accounts for the results in this paper by employing the constraint of limited computational resources. The model''s assumptions are as follows. 1. 
In each trial, each of the two stimuli is represented by a population of measure ments that encode all aspects of the stimulus, in particular, the output of localized direction detectors. The measurements are encoded as a vector. The decision as to whether the two stimuli are the same or not is determined by the difference of the two vectors. 2. Each component of the input measurements is characterized by its sensitivity for the discrimination task, e.g., how well the two motion directions can be dis criminated apart based on this component. The entire population itself is generally divided into two sets: informative - measurements with significant sensitivity, and 48 Z. Liu and D. Weinshall 270 Sltjects Figure 3: Left: Discrimination sensitivity d'' of subject JY who was trained in the primary direction 3000 Middle : d'' of control subject YHL who had no training in between the two measurements. Right: Average d'' (and standard error) for all subjects before and after training. Trained: results for the four trained subjects. Note the substantial improvement between the two measurements. For these subjects, the d'' measured after training is shown separately for the trained direction (middle column) and the remaining directions (right column). Control: results for the nine control subjects. The control subjects improved their performance significantly less than the trained subjects (tld'' uninformative - measurements with null sensitivity. In addition, informative mea surements may vary greatly in their individual sensitivity. When many have high sensitivity, the task is easy. When most have low sensitivity, the task is difficult. We assume that sensitivity changes from one primary direction to the next, but the population of informative measurements remains constant. For example, in our psychophysical task localized directional signals are likely to be in the informative set for any motion direction, though their individual sensitivity will vary based on specific motion directions. On the other hand, local speed signals are never informative and therefore always belong to the uninformative set. 3. Due to limited computational capacity, the system can, at a time, only process a small number of components of the input vector. The decision in a single trial is therefore made based on the magnitude of this sub-vector, which may vary from trial to trial. In each trial the system rates the processed components of the sub-vector according to their sensitivity for the discrimination task. After a sufficient number of trials (enough to estimate all the component sensitivities of the sub-vector), the system identifies the least sensitive component and replaces it in the next trial with a new random component from the input vector. In effect, the system is searching from the input vector a sub-vector that gives rise to the maximal discrimination sensitivity. Therefore the performance of the system is gradually improving, causing learning from session to session in the training direction. 4. After learning in one training direction, the system identifies the sets of in formative and uninformative measurements and include in the informative set any measurement with significant (though possibly low) sensitivity. In the next training direction, only the set of informative measurements is searched. The search becomes more efficient, and hence the acceleration of the learning rate. This accounts for the learning between training directions. 
We further assume that each stimulus generates a signal that is a vector of N measurements: {Idl'' We also assume that the signal for the discrimination task is the difference between two stimulus measurements: x {Xi}l'' Xi tlli. The Mechanisms of Generalization in Perceptual Learning 49 samedifferent discrimination task is to decide whether x is generated by noise - the null vector 0, or by some distinct signal - the vector S. At time t a measurement vector xt is obtained, which we denote x st if it is the signal S, and xnt otherwise. Assume that each measurement in xt is a normal We measure the sensitivity d'' of each component. Since both the signal and noise are assumed to be normal random variables, the sensitivity of the i-th measurement in the discrimination task is d lJ.lilai. Assuming further that the measurements are independent of each other and of time, then the combined sensitivity of M measurements is d'' JL:l (J.ldai)2. 3.1 Limited resources: an assumption We assume that the system can simultaneously process at most M « N of the original N measurements. Since the sensitivity d of the different measurements varies, the discrimination depends on the combined sensitivity of the particular set of M measurements that are being used. Learning in the first training direction, therefore, leads to the selection of a "good" subset of the measurements, obtained by searching in the measurement space. After searching for the best M measurements for the current training direction, the system divides the measurements into two sets: those with non-negligible sensitivity, and those with practically null sensitivity. This rating is kept for the next training direction, when only the first set is searched. One prediction of this model is that learning rate should not increase with exposure only. In other words, it is necessary for subjects to be exposed to the stimulus and do the same discrimination task for effective inter-directional learning to take place. For example, assume that the system is given N measurements : N 2 motion direction signals and N 2 speed signals. It learns during the first training direction that the N 2 speed signals have null sensitivity for the direction discrimination task, whereas the directional signals have varying (but significant) sensitivity. In the second training direction, the system is given the N measurements whose sensitivity profile is different from that in the first training direction, but still with the property that only the directional signals have any significant sensitivity (Fig. 4b). Based on learning in the first training direction, the system only searches the measurements whose sensitivity in the first training direction was significant, namely , the N 2 directional signals. It ignores the speed signals. N ow the asymptotic performance in the second direction remains unchanged because the most sensitive measurements are within the searched population - they are directional signals. The learning rate, however, doubles since the system searches a space half as large. 3.2 Simulation results To account for the different modes of learning, we make the following assumptions. When the task is easy, many components have high sensitivity d''. When the task is difficult, only a small number of measurements have high d''. Therefore, when the task is easy, a subset of M measurements that give rise to the best performance is found relatively fast. 
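The sensitivity expressions in the preceding paragraphs are garbled by extraction; in standard signal-detection notation, for independent normal measurements, they presumably read:

```latex
d'_i = \frac{|\mu_i|}{\sigma_i}, \qquad
d'   = \sqrt{\sum_{i=1}^{M} \left(\frac{\mu_i}{\sigma_i}\right)^{2}}
```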
In the extreme, when the task is very easy (e.g., all the mea surements have very high sensitivity), the rate of learning is almost instantaneous and the observed outcome appears to be transfer. On the other hand, when the task is difficult, it takes a long time to find the M measurements that give rise to the best performance, and learning is slow. 50 Z. Liu and D . Weinshall Figure 4: Hypothetical sensitivity profile for a population of measurements of speed and motion direction. Left: First training direction - only the motion direction measure ments have significant sensitivity (d'' above 0.1), with measurements around 450 having the highest d''. Right: Second direction - only the motion direction measurements have significant sensitivity, with measurements around 1350 having the highest d''. The detailed operations of the model are as follows. In the first training direction, the system starts with a random set of M measurements. In each trial and using feedback, the mean and standard deviation of each measurement is computed: J.L:t, ar for the signal and J.Lit, art for the noise. In the next trial, given M measurements x as the signal if 5 0, and noise otherwise. At time T, the worst measurement is identified as argval of mini d, d 21J.Lf - J.LiTI(ar art). It is then replaced randomly from one of the remaining N - M measurements. The learning and decision making then proceed as above for another T iterations. This is repeated until the set of chosen measurements stabilizes. At the end, the decision is made based on the set of M measurements that have the highest sensitivities. Figure 5: Simulated performance (percent correct) as function of time. Left: Difficult condition - the number of measurements with high d is small (4 out of 150); there is no transfer from the first to the second training direction, but the learning rate is increased two-fold. This graph is qualitatively similar to the results shown in the top row of Fig. 2. Right: Easy condition - the number of measurements with high d is large (72 out of 150); there is almost complete transfer from the first to the secQnd training direction. At the very beginning of training in the second direction, based on the measured d in the first direction, the measurement population is labeled as informative - those with d larger than the median value, and uninformative - the remaining measurements. The learning and decision making proceeds as above, while only informative measurements are considered during the search. In the simulation we used N 150 measurements, with M 4. Half of the N measurements (the informative measurements) had significant d. In the second training direction, the sensitivities of the measurements were randomly changed, but only the informative measurements had significant d. By varying the number of measurements with high di in the population of informative measurements, we get the different modes of generalization(Fig. 5). Mechanisms of Generalization in Perceptual Learning 51 4 Discussions In contrast to previous results on the specificity of learning, we broadened the search for generalization beyond traditional transfer. We found that generalization is the rule, rather than an exception. Perceptual learning of motion discrimination generalizes in various forms: as acceleration of learning rate (Exp. 1), as immediate improvement in performance (Exp. 2). 
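The search procedure just described can be mimicked with a toy simulation: keep M of the N measurements active, decide signal versus noise from the active ones, and periodically swap out the weakest measurement for a random one from the searchable pool. Everything below (the decision rule, the parameter values, and the use of the true sensitivities in place of the empirical rating) is an illustrative assumption, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_limited_resource_learning(d_true, pool, M=4, trials=4000, T=200):
    """Toy version of the limited-resource search: M active measurements,
    the worst one replaced every T trials by a random member of `pool`.
    Returns per-block accuracy, which should rise as better measurements
    are found (faster when `pool` is small, as after the first direction)."""
    d_true = np.asarray(d_true, dtype=float)
    active = list(rng.choice(pool, size=M, replace=False))
    correct = []
    for t in range(trials):
        is_signal = rng.random() < 0.5
        mean = d_true[active] if is_signal else np.zeros(M)
        x = rng.normal(mean, 1.0)                      # one noisy sample per active measurement
        decide_signal = x.sum() > 0.5 * d_true[active].sum()
        correct.append(decide_signal == is_signal)
        if (t + 1) % T == 0:                           # replace the weakest active measurement
            worst = int(np.argmin(d_true[active]))     # stand-in for the empirical d' rating
            candidates = [i for i in pool if i not in active]
            if candidates:
                active[worst] = int(rng.choice(candidates))
    return np.mean(np.array(correct).reshape(-1, T), axis=1)
```

Running it once with the full pool and once with the pool restricted to the informative half of the measurements is the analogue of the two training directions in the experiment.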
Thus we show that perceptual learning is more similar to cognitive learning than previously thought, with both stimulus specificity and generalization as important ingredients. In our scheme, the assumption of the computational resource forced the discrimina tion system to search in the measurement space. The generalization phenomena - transfer and increased learning rate - occur due to improvement in search sensitiv ity from one training direction to the next, as the size of the search space decreases with learning. Our scheme also predicts that learning rate should only improve if the subject both sees the stimulus and does the relevant discrimination task, in agreement with the results in Exp. 1. Importantly, our scheme does not predict transfer per se, but instead a dramatic increase in learning rate that is equivalent to transfer. Our model is qualitative and does not make any concrete quantitative predictions. We would like to emphasize that this is not a handicap of the model. Our goal is to show , qualitatively, that the various generalization phenomena should not surprise us, as they should naturally occur in a generic discrimination system with limited computational resources. Thus we argue that it may be too early to use existing perceptual learning results for the identification of the cortical location of perceptual learning, and the levels at which modifications are taking place. References [1] Ahissar M and Hochstein S. Task difficulty and the specificity of perceptual [2] Ball K and Sekuler R. A specific and enduring improvement in visual motion [3] Fiorentini A and Berardi N. Perceptual learning specific for orientation and [4] Gilbert C D. Early perceptual learning. PNAS, 91:1195-1197, 1994. [5] Karni A and Sagi D. Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. PNAS, 88:4966-4970, 1991. [6] Liu Z. Learning a visual skill that generalizes. Tech. Report, NECI, 1995. [7] Liu Z and Vaina L M. Stimulus specific learning: a consequence of stimulus specific experiments? Perception, 24(supplement):21, 1995. [8] Poggio T, Fahle M, and Edelman S. Fast perceptual learning in visual hyper [9] Ramachandran V S. Learning-like phenomena in stereopsis. Nature, 262:382- [10] Rubin N, Nakayama K, and Shapley R. Abrupt learning and retinal size specificity in illusory-contour perception. Current Biology, 7:461-467,1997.' - 'Introduction Application of mean-field theory to solve the problem of inference in Belief Net works(BNs) is well known [1]. In this paper we will discuss a variational mean-field theory and its application to BNs, sigmoidal BNs in particular. We present a variational derivation of the mean-field theory, proposed by Plefka[2]. The theory will be developed for a stochastic system, consistin of N binary random variables, Si E {O, I}, described by the energy function E(S), and the following Boltzmann Gibbs distribution at a temperature T: The application of this mean-field method to Boltzmann Machines(BMs) is already done [3]. A large class of BN s are described by the following energy function: The application of the mean-field theory for such energy functions is not straight forward and further approximations are needed. We propose a new approximation scheme and discuss its utility for sigmoid networks, which is obtained by substitut- f(x) 1 eX in the above energy function. The paper is organized as follows. In section 2 we present a variational derivation of Plefka''s mean-field theory. 
In section 3 the theory is extended to sigmoidal belief networks. In section 4 empirical evaluation is done. Concluding remarks are given in section 5. 2 A Variational mean-field theory Plefka,[2] proposed a mean-field theory in the context of spin glasses. This theory can, in principle, yield arbitrarily close approximation to log Z. In this section we present an alternate derivation from a variational viewpoint, see also [4],[5]. Let ''Y be a real parameter that takes values from 0 to 1. Let us define a ''Y dependent partition and distribution function, Note that Zl Z and Pl p. Introducing an external real vector, Blet us rewrite where Z is the partition function associated with the distribution function p-y given Using Jensen''s Inequality, (e-X ) e-(x), we get where Taking logarithms on both sides of (4) we obtain The right hand side is defined as a function of u and ''Y via the following assumption. Invertibility assumption: For each fixed u and ''Y, (5) can be solved for if If the invertibility assumption holds then we can use u as the independent vector (with B dependent on u) and rewrite (6) as where G is as defined in (7) This then gives a variational feel: treat it as an external variable vector and choose it to minimize G for a fixed ''Y. The stationarity conditions of the above minimization problem yield At the minimum point we have the equality G - log Z"(. It is difficult to invert (5) for''Y :I 0, thus making it impossible to write an algebraic expression for G for any nonzero ''Y. At ''Y 0 the inversion is straightforward and one obtains A Taylor series approach is then undertaken around ''Y 0 to build an approximation to G. Define Then G M can be considered as an approximation of G. The stationarity conditions are enforced by setting In this paper we will restrict ourselves to M 2. To do this we need to evaluate the following derivatives where For M 1 we have the standard mean-field approach. The expression for M 2 can be identified with the TAP correction. The term (10) yields the TAP term for BM energy function. 3 Mean-field approximations for BNs The method, as developed in the previous section, is not directly useful for BNs because of the intractability of the partial derivatives at ''Y O. To overcome this problem, we suggest an approximation based on Taylor series expansion. Though in this paper we will be restricting ourselves to sigmoid activation function, this method is applicable to other activation functions also. This method enables cal culation of all the necessary terms required for extending Plefka''s method for BN s. Since, for BN operation T is fixed to 1, T will be dropped from all equations in the rest of the paper. Let us define a new energy function where Since (3 is the important parameter, E((3, S, il, w) will be referred to as E((3) so as to avoid notational clumsiness. We use a Taylor series approximation of E((3) with respect to (3. Let us define If Ee approximates E, then we can write Let us now define the following function The Bi are assumed to be functions of il, (3, ''Y, which are obtained by inverting equations (12) By replacing E by Ee in (15) we obtain Ae where the definition of il is obtained by replacing E by Ee. In view of (14) one can consider Ae as an approximation to A. 
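The Jensen step quoted above is the usual Gibbs-Bogoliubov argument; written out for an arbitrary tractable distribution q it gives the familiar variational lower bound on the log partition function (a standard-form restatement, not the paper's exact gamma-dependent expression):

```latex
\big\langle e^{-x} \big\rangle \;\ge\; e^{-\langle x \rangle}
\qquad\Longrightarrow\qquad
\ln Z \;\ge\; -\frac{1}{T}\,\langle E(S) \rangle_{q} \;+\; H(q).
```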
This observation suggests an approximation The required terms needed in the Taylor expansion of G in ''Y can be approximated The biggest advantage in working with Ae rather than G is that the partial deriva tives of Ae with respect to ''Y at ''Y 0 and (3 1 can be expressed as functions of il. We define (18) Figure 1: Three layer BN (2 x 4 x 6) with top down propagation of beliefs. The activation function was chosen to be sigmoid. In light of the above discussion one can consider G M :::::i a MC j hence the mean-field equations can be stated as In this paper we will restrict ourselves to M 2. The relevant objective functions for a general C is given by All these objective functions can be expressed as a function of u. 4 Experimental results To test the approximation schemes developed in the previous schemes, numerical experiments were conducted. Saul et al.[l] pioneered the application of mean-field theory to BNs. We will refer to their method as the SJJ approach. We compare our schemes with the SJ J approach. Small Networks were chosen so that In Z can be computed by exact enumeration for evaluation purposes. For all the experiments the network topology was fixed to the one shown in figure 1. This choice of the network enables us to compare the results with those of [1]. To compare the performance of our methods with their method we repeated the experiment conducted by them for sigmoid BNs. Ten thousand networks were generated by randomly choosing weight values in [-1,1]. The bottom layer units, or the visible units of each network were instantiated to zero. The likelihood, In Z, was computed by exact enumeration of all the states in the higher two layers. The approximate value of - In Z was computed by a MC j U was computed by solving the fixed point equations obtained from (19). The goodness of approximation scheme was tested by the following measure For a proper comparison we also implemented the SJJ method. The goodness of approximation for the SJ J scheme is evaluated by substituting a MC, in (22) by Lsapprox, for specific formula see [1]. The results are presented in the form of histograms in Figure 2. We also repeated the experiment with weights and () () small weights [-1, 1] large weights [-5,5] Table 1: Mean of for randomly generated sigmoid networks, in different weight ranges. biases taking values between -5 and 5, the results are again presented in the form of histograms in Figure 3. The findings are summarized in the form of means tabulated in Table l. For small weights G12 and the SJJ approach show close results, which was expected. But the improvement achieved by the G22 scheme is remarkable; it gave a mean value of 0.0029 which compares substantially well against the mean value of 0.01139 reported in [6]. The improvement in [6] was achieved by using mixture distribution which requires introduction of extra variational variables; more than 100 extra vari ational variables are needed for a 5 component mixture. This results in substantial increase in the computation costs. On the other hand the extra computational cost for G22 over G12 is marginal. This makes the G22 scheme computationally attractive over the mixture distribution. Figure 2: Histograms for GlO and SJJ scheme for weights taking values in [-1,1], for sigmoid networks. The plot on the left show histograms for for the schemes Gu and G12 They did not have any overlaps; Gu , gives a mean of -0.040 while G12 gives a mean of 0.0155. 
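The mean-field parameters u enter through stationarity (fixed-point) equations such as (19); a damped iteration is one generic way to solve such equations numerically. This is a sketch under our own assumptions, not the authors' solver:

```python
import numpy as np

def solve_fixed_point(g, u0, damping=0.5, tol=1e-6, max_iter=1000):
    """Damped fixed-point iteration u <- (1 - a) u + a g(u) for mean-field
    stationarity equations of the form u = g(u)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        u_new = (1.0 - damping) * u + damping * np.asarray(g(u), dtype=float)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u
```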
The middle plot shows the histogram for the SJJ scheme, mean is given by 0.0157.The plot at the extreme right is for the scheme G22 , having Of the three schemes G12 is the most robust and also yields reasonably accurate results. It is outperformed only by G22 in the case of sigmoid networks with low weights. Empirical evidence thus suggests that the choice of a scheme is not straight forward and depends on the activation function and also parameter values. Figure 3: Histograms for the G10 and SJJ schemes for weights taking values in [-5,5] for sigmoid networks. The leftmost histogram shows for G11 scheme having a mean of -0.0440, second from left is for G12 scheme having a mean of 0.0231, and second from right is for SJJ scheme, having a mean of 0.0962. The scheme G22 is at the extreme right with mean -0.0456. 5 Discussion Application of Plefka''s theory to BNs is not straightforward. It requires compu tation of some averages which are not tractable. We presented a scheme in which the BN energy function is approximated by a Taylor series, which gives a tractable approximation to the terms required for Plefka''s method. Various approximation schemes depending on the degree of the Taylor series expansion are derived. Unlike the approach in [1], the schemes discussed here are simpler as they do not introduce extra variational variables. Empirical evaluation on small scale networks shows that the quality of approximations is quite good. For a more detailed discussion of these points see [7]. References [1] Saul, L . K. and Jaakkola, T. and Jordan, M. 1.(1996), Mean field theory for sigmoid belief networks, Journal of Artificial Intelligence Research,4 [2] Plefka, T . (1982), Convergence condition of the TAP equation for the Infinite-ranged Ising glass model,J. Phys. A: Math. Gen.,15 [3] Kappen, H. J and Rodriguez, F. B(1998), Boltzmann machine learning using mean field theory and linear response correction, Advances in Neural Information Process ing Systems 10, (eds.) M. I. Jordan and M. J. Kearns and S. A. Solla, MIT press [4] Georges, A. and Yedidia, J. S.(1991), How to expand around mean-field theory using high temperature expansions,J. Phys. A: Math. Gen., 24 [5] Bhattacharyya, C. and Keerthi, S. S.(2000), Information geometry and Plefka''s mean field theory, J. Phys. A: Math. Gen.,33 [6] Bishop, M. C. and Lawrence, N. and Jaakkola, T. and Jordan, M. 1.(1997), Approxi mating Posterior Distributions in Belief Networks using Mixtures, Advances in Neural Information Processing Systems 10, (eds.) Jordan, M. I. and Kearns, M. J. and Solla, S., MIT press [7] Bhattacharyya, C. and Keerthi, S. S. (1999), Mean field theory for a special class of belief networks, accepted in Journal of Artificial Intelligence Research' - 'Introduction It is known that auditory neurons are tuned for a number of independent feature parameters of simple stimuli including frequency (Merzenich et al., 1973), intensity (Sutter and Schreiner, 1995), amplitude modulation (Schreiner and Urbas, 1988), and Cha racterizing Auditory Cortical Ne urons Using Reverse Co rrelation 125 others. In addition, auditory cortical responses to multiple stimuli can enhance or sup press one another in a time dependent fashion (Brosch and Schreiner, 1997; Phillips and Cynader, 1985; Shamma and Symmes, 1985), and auditory cortical neurons can be highly selective for species-specific vocalizations (Wang et al., 1995; Wollberg and Newman, 1972), suggesting complex acoustic processing by these cells. 
It is not yet known if these many independent selectivities of auditory cortical neurons reflect a discernible underlying pattern of feature decomposition, as has often been suggested (Merzenich et al., 1985; Schreiner and Mendelson, 1990; Wang et al., 1995). Further, since sustained firing rate responses in the auditory cortex to tonal stimuli are typ ically much lower than visual responses to drifting bars (deCharms and Merzenich, 1996b), it has been suggested that the preferred type of auditory stimulus may still not be known (Nelken et al., 1994). We sought to develop an unbiased method for determining the full feature selectivity of auditory cortical neurons, whatever it might be, in frequency and time based upon reverse correlation. 2 Methods Recordings were made from a chronic array of up to 49 individually placed ultra fine extracellular Iridium microelectrodes, placed in the primary auditory cortex of the adult owl monkey. The electrodes had tip lengths of 10-25microns, which yield impedance values of .5-SMOhm and good isolation of signals from individual neurons or clusters of nearby neurons. We electrochemically activated these tips to add an ultramicroscopic coating of Iridium Oxide, which leaves the tip geometry unchanged, but decreases the tip impedance by more than an order of magnitude, resulting in substantially improved recording signals. These signals are filtered from .3-8kHz, sampled at 20kHz, digitized, and sorted. The stimuli used were a variant of random V lsuII Cortn: Reveree Correlltlon U.lng 2D VI.nl Pltternl In Time SplkeT .. ln. Spltlotemporal Receptive Field Auditory Cortex: Rever.e Correlltlon U.lng 1D Auditory Pltternl (Chordl) In Tim. Spectrotempoul Receptive Field Figure 1: Schematic of stimuli used for reverse correlation. white noise which was designed to allow us to characterize the responses of neurons in time and in frequency. As shown in figure 1, these stimuli are directly analogous to stimuli that have been used previously to characterize the response properties of neurons in the primary visual cortex (Jones and Palmer, 1987; Reid and Alonso, 1995; Reid et al., 1991). In the visual case, stimuli consist of spatial checkerboards that span some portion of the two-dimensional visual field and change pattern with a short sampling interval. In the auditory case, which we have studied here, the stimuli chosen were randomly selected chords, which approximately evenly span a 126 R C. deChann s and M M. Merzenich portion of the one-dimensional receptor surface of the cochlea. These stimuli consist of combinations of pure tones, all with identical phase and all with 5 msec cosine shaped ramps in amplitude when they individually turn on or off. Each chord was created by randomly selecting frequency values from 84 possible values which span 7 octaves from 110Hz to 14080Hz in even semitone steps. The density of tones in each stimulus was 1 tone per octave on average, or 7 tones per chord, but the stimuli were selected stochastically so a given chord could be composed of a variable number of tones of randomly selected frequencies. We have used sampling rates of 10-100 chordssecond, and the data here are from stimuli with 50 chordssecond. Stimuli with random, asynchronous onset times of each tone produce similar results. These stimuli were presented in the open sound field within an acoustical isolation cham ber at 44. 
1kHz sampling rate directly from audio compact disk, while the animal sat passively in the sound field or actively performed an auditory discrimination task, receiving occasional juice rewards. The complete characterization set lasted for ten minutes, thereby including 30,000 individual chords. Spike trains were collected from mUltiple sites in the cortex simultaneously during the presentation of our characterization stimulus set, and individually reverse correlated with the times of onset of each of the tonal stimuli. The reverse correlation method computes the number of spikes from a neuron that were detected, on average, during a given time preceding, during, or following a particular tonal stimulus component from our set of chords. These values are presented in spikess for all of the tones in the stimulus set, and for some range of time shifts. This method is somewhat analogous in intention to a method developed earlier for deriving spectrotemporal receptive fields for auditory midbrain neurons (Eggermont et al., 1983), but previous methods have not been effective in the auditory cortex. 3 Results Figure 2 shows the spectrotemporal responses of neurons from four locations in the primary auditory cortex. In each panel, the time in milliseconds between the onset of a particular stimulus component and a neuronal spike is shown along the horizontal axis. Progressively greater negative time shifts indicate progressively longer latencies from the onset of a stimulus component until the neuronal spikes. The frequency of the stimulus component is shown along the vertical axis, in octave spacing from a 110Hz standard, with twelve steps per octave. The brightness corresponds to the average rate of the neuron, in spks, driven by a particular stimulus component . The reverse-correlogram is thus presented as a stimulus triggered spike rate average, analogous to a standard peristimulus time histogram but reversed in time, and is identical to the spectrogram of the estimated optimal stimulus for the cell (a spike triggered stimulus average which would be in units of mean stimulus denSity). A minority of neurons in the primary auditory cortex have spectrotemporal recep tive fields that show only a single region of increased rate, which corresponds to the traditional characteristic frequency of the neuron, and no inhibitory region. We have found that cells of this type (less than 10, not shown) are less common than cells with multimodal receptive field structure. More commonly, neurons have regions of both increased and decreased firing rate relative to their mean rate within their re ceptive fields. For terminological convemence, these will be referred to as excitatory and inhibitory regions, though these changes in rate are not diagnostic of an under lying mechanism. Neurons with receptive fields of this type can serve as detectors of stimulus edges in both frequency space, and in time. The neuron shown in figure 2a has a receptive field structure indicative of lateral inhibition in frequency space. This cell prefers a very narrow range of frequencies, and decreases its firing rate for nearby frequencies, giving the characteristic of a sharply-tuned bandpass filter. This Characterizing Auditory Cortical Neurons Using Reverse Correlation 127 Figure 2: Spectrotemporal receptive fields of neurons in the primary auditory cortex of the awake primate. These receptive fields are computed as described in methods. 
Receptive field structures read from left to right correspond to a preferred stimulus for the neuron, with light shading indicating more probable stimulus components to evoke a spike, and dark shading indicating less probable components. Receptive fields read from right to left indicate the response of the neuron in time to a particular stimulus component. The colorbars correspond to the average firing rates of the neurons in Hz at a given time preceding, during, or following a particular stimulus component. type of response is the auditory analog of a visual or tactile edge detector with lateral inhibition. Simple cells in the primary visual cortex typically show similar patterns of center excitation along a short linear segment, surrounded by inhibition (Jones and Palmer, 1987;Reid and Alonso, 1995; Reid et al., 1991). The neuron shown in figure 2b shows a decrease in firing rate caused by a stimulus frequency which at a later time causes an increase in rate. This receptive field structure is ideally suited to detect stimulus transients; and can be thought of as a detector of temporal edges. Neurons in the auditory cortex typically prefer this type of stimulus, which is initially soft or silent and later loud. This corresponds to a neuronal response which shows an increase followed by a decrease in firing rate. This is again analogous to neuronal responses in the primary visual cortex, which also typically show a firing rate pat tern to an optimal stimulus of excitation followed by inhibition, and preference for stimulus transients such as when a stimulus is first off and then comes on. The neuron shown in figures 2c shows an example which has complex receptive field structure, with multiple regions. Cells of this type would be indicative of selectiv ity for feature conjunctions or quite complex stimuli, perhaps related to sounds in the animal''s learned environment. Cells with complex receptive field structures are common in the awake auditory cortex, and we are in the process of quantifying the percentages of cells that fit within these different categories. Neurons were observed which respond with increased rate to one frequency range at one time, and a different frequency range at a later time, indicative of selectivity for frequency modulations(Suga, 1965). Regions of decreased firing rate can show similar patterns. The neuron shown in figure 2d is an example of this type. This pattern is strongly analogous to motion energy detectors in the visual system (Adelson and Bergen, 1985), which detect stimuli moving in space, and these cells are selective for changes in frequency. 128 R. C. deCharms and M M. Merzenich 2 octsec 6 octsec 10 octsec 14 octsec 30 octsec 100 octsec 2 octsec 6 octsec 10 octsec 14 octsec 30 octsec 100 octsec Figure 3: Parametric stimulus set used to explore neuronal responses to continuously changing stimulus frequency. Images axe spectrograms of stimuli from left to right in time, and spanning seven octaves of frequency from bottom to top. Each stimulus is one second. Numbers indicate the sweep rate of the stimuli in octaves per second. Based on the responses shown, we wondered whether we could find a more optimal class of stimuli for these neuron, analogous to the use of drifting bars or gratings in the primary visual cortex. 
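The reverse-correlation computation described in the Methods, namely the average spike rate as a function of tone frequency and of the lag between tone onset and spike, can be sketched as a spike-triggered histogram. Bin sizes, array shapes and the function name below are illustrative assumptions, not the recording pipeline actually used:

```python
import numpy as np

def strf_reverse_correlation(spike_times_ms, tone_onsets_ms, tone_freq_idx,
                             n_freqs=84, n_lags=40, bin_ms=5.0):
    """Spectrotemporal receptive field estimate: for every tone component,
    count spikes in bins following its onset, then normalise to spikes/s
    per presentation of each frequency."""
    counts = np.zeros((n_freqs, n_lags))
    n_events = np.zeros(n_freqs)
    spikes = np.sort(np.asarray(spike_times_ms, dtype=float))
    for t0, f in zip(tone_onsets_ms, tone_freq_idx):
        n_events[f] += 1
        lo = np.searchsorted(spikes, t0)
        hi = np.searchsorted(spikes, t0 + n_lags * bin_ms)
        lags = ((spikes[lo:hi] - t0) // bin_ms).astype(int)
        np.add.at(counts[f], lags, 1)
    return counts / np.maximum(n_events[:, None], 1) / (bin_ms / 1000.0)
```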
We have created auditory stimuli which correspond exactly to the preferred stimulus computed for a paxticulax cell from the cell''s spectrotemporal receptive field (manuscript in prepaxation), and we have also designed a paxametric class of stimuli which are designed to be particularly effective for neurons selective for stimuli of changing amplitude or frequency, which are presented here. The stimuli shown in figure 3 are auditory analogous of visual drifting grating stimuli. The stimuli axe shown as spectrograms, where time is along the horizontal axis, frequency content on an octave scale is along the vertical axis, and brightness corresponds to the intensity of the signal. These stimuli contain frequencies that change in time along an octave frequency scale so that they repeatedly pass approximately linearly through a neurons receptive field, just as a drifting grating would pass repeatedly through the receptive field of a visual neuron. These stimuli axe somewhat analogous to drifting ripple stimuli which have recently been used by Kowalski, et.al. to characterize the linearity of responses of neurons in the anesthetized ferret auditory cortex (Kowalski Neurons in the auditory cortex typically respond to tonal stimuli with a brisk onset response at the stimulus transient, but show sustained rates that axe far smaller than found in the visual or somatosensory systems (deCharms and Merzenich, 1996a). We have found neurons in the awake animal that respond with high firing rates and significant selectivity to the class of moving stimuli shown in figure 3. An outstanding example of this is shown in figure 4. The neuron in this example showed a very high sustained firing rate to the optimal drifting stimulus, as high as 60 Hz for one second. The neuron shown in this example also showed considerable selectivity for stimulus velocity, as well as some selectivity for stimulus direction. 4 Conclusions These stimuli enable us to efficiently quantify the response characteristics of neu rons in the awake primaxy auditory cortex, as well as producing optimal stimuli for particular neurons. The data that we have gathered thus far extend our knowledge about the complex receptive field structure of cells in the primary auditory cortex, Cha racterizing Auditory Cortical Ne urons Using Reverse Correlation 129 2 octsec 6 octsec 10 octIsec 14 octsec 30 octsec 100 octsec -2 octsec -6 octsec -10 octsec -14 octsec -30 octsec -100 octsec Figure 4: Responses of a neuron in the primary auditory cortex of the awake pri mate to example stimuli take form our characterization set, as shown in figure 3. In each panel, the average response rate histogram in spikes per second is shown below rastergrams showing the individual action potentials elicited on,each of twenty trials. and show some considerable analogy with neurons in the primary visual cortex. In addition, they indicate that it is possible to drive auditory cortical cells to high rates of sustained firing, as in the visual cortex. This method will allow a number of future questions to be addressed. Since we have recorded many neurons simultaneously, we are interested in the interactions among large populations of neurons and how these relate to stimuli. 
We are also recording responses to these stimuli while monkeys are performing cognitive tasks involving attention and learning, and we hope that this will give us insight into the effects on cell selectivity of the context provided by other stimuli, the animal''s behavioral state or awareness of the stimuli, and the animal''s prior learning of stimulus sets. 5 References Adelson EH, Bergen JR (1985) Spatiotemporal energy models for the perception of Brosch M, Schreiner CE (1997) Time course of forward masking tuning curves in cat primary auditory cortex. J Neurophysiol, 77, 923-43. deCharms Re, Merzenich MM (1996a) Primary cortical representation of sounds by the coordination of action-potential timing. Nature, 381, 610-3. deCharms RC , Merzenich MM (1996b) Primary cortical representation of sounds by the coordination of action-potential timing. Nature, 381, 610-613. EggeI1I).ont JJ, Aertsen AM, Johannesma PI (1983) Quantitative characterisation procedure for auditory neurons based on the spectro-temporal receptive field. Hear Hubel DH, Wiesel TN (1962) Receptive fields, binocular interaction and functional archtecture in the cat''s visual cortex. J. Physiol., 160, 106-154. Jones JP, Palmer LA (1987) The two-dimensional spatial structure of simple receptive 130 R. C. deCharms and M M . Merzenich fields in cat striate cortex. J Neurophysiol, 58, 1187-211. Kowalski N, Depireux DA, Shamma SA (1996a) Analysis of dynamic spectra in ferret primary auditory cortex. I. Characteristics of single-unit responses to moving ripple spectra. J Neurophysiol, 76, 3503-23. Kowalski N, Depireux DA, Shamma SA (1996b) Analysis of dynamic spectra in fer ret primary auditory cortex. II. Prediction of unit responses to arbitrary dynamic spectra. J Neurophysiol, 76, 3524-34. Merzenich MM, Jenkins WM, Middlebrooks JC (1985) Observations and hypotheses on special organizational features of the central auditory nervous system. In: Dy namic Aspects of Neocortical Function. Edited by E. G. a. W. M . C. G. Edelman . Merzenich MM, Knight PL, Roth GL (1973) Cochleotopic organization of primary auditory cortex in the cat. Brain Res, 63, 343-6. Nelken I, Prut Y, Vaadia E, Abeles M (1994) In search of the best stimulus: an optimization procedure for finding efficient stimuli in the cat auditory cortex. Hear Phillips DP, Cynader MS (1985) Some neural mechanisms in the cat''s auditory cortex underlying sensitivity to combined tone and wide-spectrum noise stimuli. Hear Res, Reid RC, Alonso JM (1995) Specificity of monosynaptic connections from thalamus to visual cortex. Nature, 378,281-4. Reid RC, Soodak RE, Shapley RM (1991) Directional selectivity and spatiotemporal structure of receptive fields of simple cells in cat striate cortex. J Neurophysiol, 66, Ringach DL, Hawken MJ, Shapley R (1997) Dynamics of orientation tuning in macaque primary visual cortex. Nature, 387, 281-4. Schreiner CE, Mendelson JR (1990) Functional topography of cat primary auditory cortex: distribution of integrated excitation. J Neurophysiol, 64, 1442-59. Schreiner CE, Urbas JV (1988) Representation of amplitude in the auditory cortex of the cat. II. Comparison between cortical fields. Hear. Res., 32, 49-64. Shamma SA, Symmes D (1985) Patterns of inhibition in auditory cortical cells in awake squirrel monkeys. Hear Res, 19, 1-13. Suga N (1965) Responses of cortical auditory neurones to frequency modulated sounds in echo-locating bats. Nature, 206, 890-l. 
Sutter ML, Schreiner CE (1995) Topography of intensity tuning in cat primary au ditory cortex: single-neuron versus multiple-neuron recordings. J Neurophysiol, 73, Wang X, Merzenich MM, Beitel R, Schreiner CE (1995) Representation of a species specific vocalization in the primary auditory cortex of the common marmoset: tem poral and spectral characteristics. J Neurophysiol, 74, 2685-706. Wollberg Z, Newman JD (1972) Auditory cortex of squirrel monkey: response pat terns of single cells to species-specific vocalizations. Science, 175, 212-214.' - source_sentence: Enhanced learning efficiency through input redundancy cancellation in neural networks sentences: - 'INTRODUCTION Learning problems involving sequentially structured data cannot be effectively dealt with static models such as feedforward networks. Recurrent networks allow to model complex dynamical systems and can store and retrieve contextual information in a flexible way. Up until the present time, research efforts of supervised learning for recurrent networks have almost exclusively focused on error minimization by gradient descent methods. Although effective for learning short term memories, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the inputoutput sequences span long intervals (Bengio et al., 1994; Mozer, 1992). Previous work on alternative training algorithms (Bengio et al., 1994) could suggest that the root of the problem lies in the essentially discrete nature of the process of storing information for an indefinite amount of time. Thus, a potential solution is to propagate, backward in time, targets in a discrete state space rather than differential error information. Extending previous work (Bengio Frasconi, 1994a), in this paper we propose a statistical approach to target propagation, based on the EM algorithm. We consider a parametric dynamical system with discrete states and we introduce a modular architecture, with subnetworks associated to discrete states. The architecture can be interpreted as a statistical model and can be trained by the EM or generalized EM (GEM) algorithms (Dempster et al., 1977), considering the internal state trajectories as missing data. In this way learning is decoupled into also, ATT Bell Labs, Holmdel, N J 07733 428 Yoshua Bengio, Paolo Frasconi a temporal credit assignment subproblem and a static learning subproblem that consists of fitting parameters to the next-state and output mappings defined by the estimated trajectories. In order to iteratively tune parameters with the EM or GEM algorithms, the system propagates forward and backward a discrete distribution over the n states, resulting in a procedure similar to the Baum- Welch algorithm used to train standard hidden Markov models (HMMs) (Levinson et al., 1983). HMMs however adjust their parameters using unsupervised learning, whereas we use EM in a supervised fashion. Furthermore, the model presented here could be called InputOutput HMM , or IOHMM , because it can be used to learn to map input sequences to output sequences (unlike standard HMMs, which learn the output sequence distribution). This model can also be seen as a recurrent version of the Mixture of Experts architecture (Jacobs et al., 1991), related to the model already proposed in (Cacciatore and Nowlan, 1994). 
Experiments on artificial tasks (Bengio Frasconi, 1994a) have shown that EM recurrent learning can deal with long term dependencies more effectively than backpropaation through time and other alternative algorithms. However, the model used in (Bengio Frasconi, 1994a) has very limited representational capabilities and can only map an input sequence to a final discrete state. In the present paper we describe an extended architecture that allows to fully exploit both input and output portions of the data, as required by the supervised learning paradigm . In this way , general sequence processing tasks, such as production, classification, or prediction, can be dealt with. 2 THE PROPOSED ARCHITECTURE We consider a discrete state dynamical system based on the following state space description: x - f(x U ) where Ut E R m is the input vector at time t, Yt E R r is the output vector, and Xt E {I, 2, ... , n} is a discrete state. These equations define a generalized Mealy finite state machine, in which inputs and outputs may take on continuous values. In this paper, we consider a probabilistic version of these dynamics, where the current inputs and the current state distribution are used to estimate the state distribution and the output distribution for the next time step. Admissible state transitions will be specified by a directed graph 9 whose vertices correspond to the model ''s states and the set of successors for state j is Sj. Th e system defined by equations (1) can be modeled by the recurrent architecture depicted in Figure l(a). The architecture is composed by a set of state networks N j, j 1 ... n and a set of output networks OJ, j 1 ... n. Each one of the state and output networks is uniquely associated to one of the states,and all networks share the same input Ut . Each state network M has the task of predicting the next state distribution, based on the current input and given that Xt-l j. Similarly, each output network OJ predicts the output of the system, given the current state and input. All the subnetworks are assumed to be static and they are defined by means of smooth mappings Nj (Ut; 9j) and OJ (Ut; 1J j), where 9 j and 1J j are vectors of adjustable parameters (e.g., connection weights). The ranges of the functions N j 0 may be constrained in order to account for the underlying transition graph 9. Each output ''Pij,t of the state subnetwork N j (at time t) is associated to one of the successors i of state j. Thus the last layer of M has as many units as the cardinality of Sj. For convenience of notation, we suppose that ''Pij,t are defined for each i, j 1, ... , n and we impose the condition ''Pij,t 0 for each i not belonging to S j. The softmax function is used in the last layer: ''Pij,t e a,j,t ILlEsj ea lj,t, j 1, ... , n , i E Sj where aij,t are intermediate variables that can be thought of as the An Input Output HMM Architecture current pectod output, given PIlat Input Mquenc. current atilt. dlatrtbutton IOHMM Figure 1: (a): The proposed IOHMM architecture. (b): Bottom: Bayesian network expressing conditional dependencies for an IOHMM; top: Bayesian network for a standard HMM activations of the output units of subnetwork N j. In this way L:71 ''Pij,t 1 Tij,t. 
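As a concrete illustration of the state subnetworks N_j and the softmax constraint on their outputs, the following sketch implements one N_j as a single linear layer whose outputs are restricted to the successor set S_j. The layer shape and initialisation are assumptions; the text leaves the internal architecture of the subnetworks unspecified.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

class StateNetwork:
    """Sketch of one state subnetwork N_j: given the input u_t, it returns the
    transition probabilities phi[i] = P(x_t = i | x_{t-1} = j, u_t), which are
    zero outside the successor set S_j and sum to one over it (softmax layer)."""
    def __init__(self, n_states, input_dim, successors, rng):
        self.n_states = n_states
        self.successors = np.asarray(successors)                  # indices i in S_j
        self.W = 0.1 * rng.standard_normal((len(successors), input_dim))

    def __call__(self, u_t):
        a = self.W @ u_t                                          # intermediate activations a_{ij,t}
        phi = np.zeros(self.n_states)
        phi[self.successors] = softmax(a)                         # phi_{ij,t} = 0 for i outside S_j
        return phi
```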
The vector ''t E R n represents the internal state of the model and it is computed as a linear combination of the outputs of the state networks, gated by the previously computed internal state: n output of the system 1Jt E R r : Output networks compete to predict the global where 1Jjt E R r is the output of subnetwork OJ. At this level, we do not need to further specify the internal architecture of the state and output subnetworks. Depending on the task, the designer may decide whether to include hidden layers and what activation rule to use for the hidden units. This connectionist architecture can be also interpreted as a probability model. Let us assume a multinomial distribution for the state variable Xt and let us consider ''t, the main variable of the temporal recurrence (2). If we initialize the vector ''0 to positive numbers summing to 1, it can be interpreted as a vector of initial state probabilities. In general, we obtain relation (it P(Xt i I un, having denoted with ui the subsequence of inputs from time 1 to t, inclusively. Equation (2) then has the following probabilistic interpretation: i.e., the subnetworks N j compute transition probabilities conditioned on the input As in neural networks trained to minimize the output squared error, the output 1Jt of this architecture can be interpreted as an expected "position parameter" for the probability distribution of the output Yt. However, in addition to being conditional on an input Ut, this expectation is also conditional on the state Xt, i.e. 430 Yoshua Bengio, Paolo Frasconi 7]t E[Yt I Xt,Ut]. The actual form of the output density, denoted !Y(Yt;7]t), will be chosen according to the task. For example a multinomial distribution is suitable for sequence classification, or for symbolic mutually exclusive outputs. Instead, a Gaussian distribution is adequate for producing continuous outputs. In the first case we use a softmax function at the output of subnetworks OJ; in the second case we use linear output units for the subnetworks O J. In order to reduce the amount of computation, we introduce an independency model among the variables involved in the probabilistic interpretation of the architecture. We shall use a Bayesian network to characterize the probabilistic dependencies among these variables. Specifically, we suppose that the directed acyclic graph 9 depicted at the bottom of Figure 1 b is a Bayesian network for the dependency model associated to the variables u I, xI, YI. One of the most evident consequences of this independency model is that only the previous state and the current input are relevant to determine the next-state. This one-step memory property is analogue to the Markov assumption in hidden Markov models (HMM). In fact, the Bayesian network for HMMs can be obtained by simply removing the Ut nodes and arcs from them (see top of Figure Ib). 3 A SUPERVISED LEARNING ALGORITHM The learning algorithm for the proposed architecture is derived from the maximum likelihood principle. The training data are a set of P pairs of input output sequences (of length Tp): 1) {(uip(p),Yip(p));p 1 ... P}. Let J denote the vector of parameters obtained by collecting all the parameters (Jj and iJi of the architecture. The likelihood function is then given by The output values (used here as targets) may also be specified intermittently. For example, in sequence classification tasks, one may only be interested in the out put YT at the end of each sequence. 
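The forward recurrence for the internal state and the gated output prediction can be written compactly as below. This is a restatement of equation (2) and the output mixture, not the original implementation; it assumes state and output subnetworks with the call signature of the StateNetwork sketch above.

```python
import numpy as np

def iohmm_forward(state_nets, output_nets, inputs, zeta0):
    """Recurrence of the architecture: zeta_t[i] = sum_j phi_{ij,t} zeta_{t-1}[j]
    (equation 2) and eta_t = sum_i zeta_t[i] eta_{i,t} (the gated output).
    state_nets[j](u_t) returns the column phi_{., j, t}; output_nets[i](u_t)
    returns eta_{i, t}."""
    zeta = np.asarray(zeta0, dtype=float)        # initial state probabilities, summing to 1
    expected_outputs = []
    for u_t in inputs:
        phi = np.stack([net(u_t) for net in state_nets], axis=1)     # phi[i, j]
        zeta = phi @ zeta                                             # P(x_t = i | u_1..t)
        eta_states = np.stack([net(u_t) for net in output_nets])      # eta_{i, t}, one row per state
        expected_outputs.append(zeta @ eta_states)                    # expected output eta_t
    return np.array(expected_outputs), zeta
```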
The modification of the likelihood to account for intermittent targets is straightforward. According to the maximum likelihood principle, the optimal parameters are obtained by maximizing (6). In order to apply EM to our case we begin by noting that the state variables Xt are not ob served. Knowledge of the model''s state trajectories would allow one to decompose the temporal learning problem into 2n static learning subproblems. Indeed, if Xt were known, the probabilities (it would be either 0 or 1 and it would be possible to train each subnetwork separately, without taking into account any temporal de pendency. This observation allows to link EM learning to the target propagation approach discussed in the introduction. Note that if we used a Viterbi-like approxi mation (i.e., considering only the most likely path), we would indeed have 2n static learning problems at each epoch. In order to we derive the learning equations, let us define the complete data as 1)c {(uiP(p),yiP(p),xiP(p));p 1 ... P}. The corresponding complete data l-likelihood is Since lc( J; 1)c) depends on the hidden state variables it cannot be maximized di rectly. The MLE optimization is then solved by introducing the auxiliary function Q(J; 0) and iterating the following two,steps for k 1, 2r ... :, Estimation: Compute Q(J; J) E[lc(J; 1)c) 1), J] Maximization: Update the parameters as 0 t-arg maxJ Q( J; 0) (8) An Input Output HMM Architecture 431 The expectation of (7) can be expressed as where hij,t EIZitzj,t-l I uf, yf; 0J, denoting Zit for an indicator variable 1 if Xt i and 0 otherwise. The hat in (it and hij,t means that these variables are computed using the "old" parameters 0 . In order to compute hij,t we introduce the forward probabilities Qit P(YL Xt i; uD and the backward probabilities f3it p(yf I Xt i, un, that are updated as follows: Each iteration of the EM algorithm requires to maximize Q(0 ; 0). We first consider a simplified case, in which the inputs are quantized (i.e., belonging to a finite alphabet {0"1,"" O"K}) and the subnetworks behave like lookup ta bles addressed by the input symbols O"t, i.e. we interpret each parameter as W i''k P(Xt i I Xt-l j,O"t k). For simplicity, we restrict the analysis to clas sification tasks and we suppose that targets are specified as desired final states for each sequence. Furthermore, no output subnetworks are used in this particular application of the algorithm. In this case we obtain the reestimation formulae: In general, however, if the subnetworks have hidden sigmoidal units, or use a soft max function to constrain their outputs to sum to one, the maximum of Q cannot be found analytically. In these cases we can resort to a GEM algorithm, that sim ply produces an increase in Q, for example by gradient ascent. In this case, the derivatives of Q with respect to the parameters can be easily computed as follows. Let Ojlt be a generic weight in the state subnetwork N j. From equation (9): where the partial derivatives :e;t can be computed using backpropagation. Sim ilarly, denoting with t''Jik a generic weight of the output subnetwork Oi, we have: where ;;:t are also computed using backpropagation. Intuitively, the parameters are updated as if the estimation step of EM had provided targets for the outputs of the 2n subnetworks, for each time t. Although GEM algorithms are also guaranteed to find a local maximum of the likelihood, their convergence may be significantly slower compared to EM. 
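The estimation step amounts to an input-conditioned forward-backward pass. The sketch below computes the posteriors g_it = E[z_it | data] and h_ij,t from precomputed transition probabilities and output likelihoods; the variable names and the unnormalised recursions are assumptions made for clarity rather than the original code.

```python
import numpy as np

def iohmm_e_step(phi, like, zeta0):
    """E-step posteriors for an IOHMM (a sketch, not the original code).
    phi[t, i, j] : P(x_t = i | x_{t-1} = j, u_t), from the state networks
    like[t, i]   : f_Y(y_t; eta_{i,t}), output likelihood under state i
    Returns g[t, i] = E[z_{it} | data] and h[t, i, j] = E[z_{it} z_{j,t-1} | data]."""
    T, n, _ = phi.shape
    alpha = np.zeros((T, n))
    beta = np.ones((T, n))
    alpha[0] = like[0] * (phi[0] @ np.asarray(zeta0, dtype=float))
    for t in range(1, T):
        alpha[t] = like[t] * (phi[t] @ alpha[t - 1])              # forward pass
    for t in range(T - 2, -1, -1):
        beta[t] = phi[t + 1].T @ (beta[t + 1] * like[t + 1])      # backward pass
    evidence = alpha[-1].sum()
    g = alpha * beta / evidence
    h = np.zeros((T, n, n))
    for t in range(1, T):
        h[t] = (beta[t] * like[t])[:, None] * phi[t] * alpha[t - 1][None, :] / evidence
    return g, h
```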
In several experiments we noticed that convergence can be accelerated with stochastic gradient ascent. 432 Yoshua Bengio, Paolo Frasconi 4 COMPARISONS It appears natural to find similarities between the recurrent architecture described so far and standard HMMs (Levinson et al., 1983). The architecture proposed in this paper differs from standard HMMs in two respects: computing style and learning. With IOHMMs, sequences are processed similarly to recurrent networks, e.g., an input sequence can be synchronously transformed into an output sequence. This computing style is real-time and predictions of the outputs are available as the input sequence is being processed. This architecture thus allows one to implement all three fundamental sequence processing tasks: production, prediction, and classification. Finally, transition probabilities in standard HMMs are fixed, i.e. states form a homogeneous Markov chain. In IOHMMs, transition probabilities are conditional on the input and thus depend on time, resulting in an inhomogeneous Markov chain. Consequently, the dynamics of the system (specified by the transition probabilities) are not fixed but are adapted in time depending on the input sequence. The other fundamental difference is in the learning procedure. While interesting for their capabilities of modeling sequential phenomena, a major weakness of stan dard HMMs is their poor discrimination power due to unsupervised learning. An approach that has been found useful to improve discrimination in HMMs is based on maximum mutual information (MMI) training. It has been pointed out that supervised learning and discriminant learning criteria like MMI are actually strictly related (Bridle, 1989). Although the parameter adjusting procedure we have defined is based on MLE, yf is used as desired output in response to the input uf, resulting in discriminant supervised learning. Finally, it is worth mentioning that a number of hybrid approaches have been proposed to integrate connectionist approaches into the HMM frame''Vork. For example in (Bengio et al., 1992) the observations used by the HMM are generated by a feedforward neural network. In (Bourlard and Wellekens, 1990) a feedforward network is used to estimate state probabilities, con ditional to the acoustic sequence. A common feature of these algorithms and the one proposed in this paper is that neural networks are used to extract temporally local information whereas a Markovian system integrates long-term constraints. We can also establish a link between IOHMMs and adaptive mixtures of experts (ME) (Jacobs et al., 1991). Recently, Cacciatore Nowlan (1994) have proposed a recurrent extension to the ME architecture, called mixture of controllers (MC), in which the gating network has feedback connections, thus allowing to take temporal context into account. Our IOHMM architecture can be interpreted as a special case of the MC architecture, in which the set of state subnetworks play the role of a gating network having a modular structure and second order connections. 5 REGULAR GRAMMAR INFERENCE In this section we describe an application of our architecture to the problem of grammatical inference. In this task the learner is presented a set of labeled strings and is requested to infer a set of rules that define a formal language. It can be considered as a prototype for more complex language processing problems. However, even in the "simplest" case, i.e. regular grammars , the task can be proved to be NP-complete (Angluin and Smith, 1983). 
We report experimental results on a set of regular grammars introduced by Tomita (1982) and afterwards used by other researchers to measure the accuracy of inference methods based on recurrent networks (Giles et al., 1992; Pollack, 1991; Watrous and Kuhn , 1992). We used a scalar output with supervision on the final output YT that was modeled as a Bernoulli variable fy (YT ; 7]T) 7]T (1 - 7] ) l-YT, with YT 0 if the string is rejected and YT 1 if it is accepted. In tbis application we did not apply An Input Output HMM Architecture 433 Table 1: Summary of experimental results on the seven Tomita''s grammars. Grammar Sizes Convergence Accuracies n FSA min Average Worst Best WK Best external inputs to the output networks. This corresponds to modeling a Moore finite state machine . Given the absence of prior knowledge about plausible state paths, we used an ergodic transition graph (i.e., fully connected).In the experiments we measured convergence and generalization performance using different sizes for the recurrent architecture. For each setting we ran 20 trials with different seeds for the initial weights. We considered a trial successful if the trained network was able to correctly label all the training strings. The model size was chosen using a cross-validation criterion based on performance on 20 randomly generated strings of length T ::; 12. For comparison, in Table 1 we also report for each grammar the number of states of the minimal recognizing FSA (Tomita, 1982). We tested the trained networks on a corpus of 213 - 1 binary strings of length T ::; 12. The final results are summarized in Table 1. The column "Convergence" reports the fraction of trials that succeeded to separate the training set. The next three columns report averages and order statistics (worst and best trial) of the fraction of correctly classified strings, measured on the successful trials. For each grammar these results refer to the model size n selected by cross-validation. Generalization was always perfect on grammars 1,4,5 and 6. For each grammar, the best trial also attained perfect generalization. These results compare very favorably to those obtained with second-order networks trained by gradient descent, when using the learning sets proposed by Tomita. For comparison, in the last column of Table 1 we reproduce the results reported by Watrous Kuhn (1992) in the best of five trials. In most of the successful trials the model learned an actual FSA behavior with transition probabilities asymptotically converging either to 0 or to 1. This renders trivial the extraction of the corresponding FSA . Indeed, for grammars 1,4,5, and 6, we found that the trained networks behave exactly like the minimal recognizing FSA . A potential training problem is the presence of local maxima in the likelihood func tion. For example, the number of converged trials for grammars 3, 4, and 5 is quite small and the difficulty of discovering the optimal solution might become a serious restriction for tasks involving a large number of states. In other experiments (Ben gio Frasconi, 1994a), we noticed that restricting the connectivity of the transition graph can significantly help to remove problems of convergence. Of course, this ap proach can be effectively exploited only if some prior knowledge about the state space is available. For example, applications of HMMs to speech recognition always rely on structured topologies. 6 CONCLUSIONS There are still a number of open questions. 
In particular, the effectiveness of the model on tasks involving large or very large state spaces needs to be carefully eval uated. In (Bengio Frasconi 1994b) we show that learning long term dependencies in these models becomes more difficult as we increase the connectivity of the state 434 Yoshua Bengio, Paolo Frasconi transition graph. However, because transition probabilities of IOHMMs change at each t, they deal better with this problem of long-term dependencies than standard HMMs. Another interesting aspect to be investigated is the capability of the model to successfully perform tasks of sequence production or prediction. For example, interesting tasks that could also be approached are those related to time series modeling and motor control learning. References Angluin, D. and Smith, C. (1983). Inductive inference: Theory and methods. Com Bengio, Y. and Frasconi, P. (1994a). Credit assignment through time: Alternatives to backpropagation. In Cowan, J., Tesauro, G., and Alspector, J., editors, Advances in Neural Information Processing Systems 6. Morgan Kaufmann. Bengio, Y. and Frasconi, P. (1994b). An EM Approach to Learning Sequential Behavior. Tech. Rep. RT-DSI11-94, University of Florence. Bengio, Y., De Mori, R., Flammia, G., and Kompe, R. (1992). Global optimization of a neural network-hidden markov model hybrid. IEEE Transactions on Neural Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Networks, 5(2). Bourlard, H. and Wellekens, C. (1990). Links between hidden markov models and multilayer perceptrons. IEEE Trans. Pattern An. Mach. Intell., 12:1167-1178. Bridle, J. S. (1989). Training stochastic model recognition algorithms as net works can lead to maximum mutual information estimation of parameters. In D .S.Touretzky, ed., NIPS2, pages 211-217. Morgan Kaufmann. Cacciatore, T. W. and Nowlan, S. J. (1994). Mixtures of controllers for jump linear and non-linear plants. In Cowan, J. et. al., editors, Advances in Neural Information Processing Systems 6, San Mateo, CA. Morgan Kaufmann. Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum-likelihood from incomplete data via the EM algorithm. J. Royal Stat. Soc. B,39:1-38. Learning and extracting finite state automata with second-order recurrent neu ral networks. Neural Computation, 4(3):393-405. Jacobs, R. A., Jordan, M. I., Nowlan, S. J., and Hinton, G. E. (1991). Adaptive mixture of local experts. Neural Computation, 3:79-87. Levinson, S. E., Rabiner, L. R., and Sondhi, M. M. (1983). An introduction to the application of the theory of probabilistic functIons of a markov process to automatic speech recognition. Bell System Technical Journal, 64(4):1035-1074. Mozer, M. C. (1992). The induction of multiscale temporal structure. In Moody, J. et. al., eds, NIPS 4 pages 275-282. Morgan Kaufmann. Pollack, J. B. (1991). The induction of dynamical recognizers. Machine Learning, Tomita, M. (1982). Dynamic construction of finite-state automata from examples using hill-climbing. Proc. 4th Cog. Science Con!, pp. 105-108, Ann Arbor MI. Watrous, R. 1. and Kuhn, G. M. (1992). Induction of finite-state languages using second-order recurrent networks. Neural Computation, 4(3):406-414.' 
- 'INTRODUCTION In many learning control problems, the evaluation used to modify (and thus im prove) control may not be available in terms of the controller''s output: instead, it may be in terms of a spatial transformation of the controller''s output variables (in which case we shall term it as being "distal in space"), or it may be available only several time steps into the future (termed as being "distal in time"). For example, control of a robot arm may be exerted in terms of joint angles, while evaluation may be in terms of the endpoint cartesian coordinates; furthermore, we may only wish to evaluate the endpoint coordinates reached after a certain period of time: the co- Current address: Computation and Neural Systems Program, California Institute of Technology, Pasadena CA. 563 564 Brody ordinates reached at the end of some motion, for instance. In such cases, supervised learning methods are not directly applicable, and other techniques must be used. Here we study one such technique (proposed for cases where the evaluation is distal in both space and time by [Jordan Jacobs 90)), analyse a source of its problems, and propose a simple solution for them which leads to fast, efficient learning. We first describe two methods, and then combine them into the "predictive forward modeling" technique with which we are concerned. 1.1 FORWARD MODELING "Forward Modeling" [Jordan Rumelhart 90] is useful for dealing with evaluations which are distal in space; it involves the construction of a differentiable model to approximate the controller-action - evaluation transformation. Let our controller have internal parameters w, output c, and be evaluated in space e, where e e(c) is an unknown but well-defined transformation. If there is a desired output in space e, called e, we can write an "error" function, that is, an evaluation we wish minimised, and differentiate it w.r.t. the controller''s weights to obtain Using a differentiable controller allows us to obtain the first factor in the second equation, and the third factor is also known; but the second factor is not. However, if we construct a differentiable model (called a ''''forward model") of e(c), then we can obtain an approximation to the second term by differentiating the model, and use this to obtain an estimate of the gradient 8E 8w through equation (1); this can then be used for comparatively fast minimisation of E, and is what is known as "forward modeling". 1.2 PREDICTIVE CRITICS To deal with evaluations which are distal in time, we may use a "critic" network, as in [Barto, Sutton Anderson 83]. For a particular control policy implemented by the controller network, the critic is trained to predict the final evaluation that will be obtained given the current state - using, for example, Sutton''s TD algorithm [Sutton 88]. The estimated final evaluation is then available as soon as we enter a state, and so may in turn be used to improve the control policy. This approach is closely related to dynamic programming [Barto, Sutton Watkins 89]. 1.3 PREDICTIVE FORWARD MODELS While the estimated evaluation we obtain from the critic is no longer distal in time, it may still be distal in space. A natural proposal in such cases, where the evaluation signal is distal both in space and time, is to combine the two techniques described above: use a differentiable model as a predictive critic [Jordan Jacobs 90]. If we know the desired final evaluation, we can then proceed as in equation (1) and obtain the gradient of the error w.r.t. 
the controller''s weights. Schematically, this would look like figure 1. When using a backprop network for the predictive model, state vector control CONTROLLER NETWORK Fast Learning with Predictive Forward Models 565 predicted evaluation PREDICTIVE MODEL Figure 1: Jordan and Jacobs'' predictive forward modeling architecture. Solid lines indi cate data paths, the dashed line indicates back propagation. we would backpropagate through it, through it''s control input, and then into the controller to modify the controller network. We should note that since predictions make no sense without a particular control policy, and the controller is only modified through the predictive model, both networks must be trained simultaneously. [Jordan Jacobs 90] applied this method to a well-known problem, that of learn ing to balance an inverted pendulum on a movable cart by exerting appropriate horizontal forces on the cart. The same task, without differentiating the critic, was studied in [Barto, Sutton Anderson 83]. There, reinforcement learning methods were used instead to modify the controller''s weights; these perform a search which in some cases may be shown to follow, on average, the gradient of the expected evaluation w.r .t. the network weights. Since differentiating the critic allows this gradient to be found directly, one would expect much faster learning when using the architecture of figure 1. However, Jordan and Jacobs'' results show precisely the opposite: it is surprisingly slow. 2 THE REDUNDANCY PROBLEM We can explain the above surprising result if we consider the fact that the predictive model network has redundant inputs: the control vector c is a function of the state vector; (call this c 17( S)). Let K. and u be the number of components of the control and state vectors, respectively. Instead of drawing its inputs from the entire volume of (K.u)-dimensional input space, the predictor is trained only with inputs which lie on the u-dimensional manifold defined by the relation 17. A way from the manifold the network is free to produce entirely arbitrary outputs. Differentiation of the model will then provide non-arbitrary gradients only for directions tangential to the manifold; this is a condition that the axes of the control dimensions will not, in general, satisfy.l This observation, which concerns any model trained with redundant inputs, is the very simple yet principal point of this paper. One may argue that since the control policy is continually changing, the redundancy picture sketched out here is not in fact accurate: as the controller is modified, many lNote that if it is single-valued, there is no way the manifold can "fold around" to cover all (or most) of the K. (T input space. 566 Brody EVALUATION EVALUATION FUNCTION MODELS CONTROL OUTPUT Figure 2: The evaluation as a function of control action. Curves A,B,C,D represent possible (wrong) estimates of the "real" curve made by the predictive model network. possible control policies are "seen" by the predictor, so creating volume in input space and leading to correct gradients obtained from the predictor. However, the way in which this modification occurs is significant. An argument based on empirical observations will be made to sustain this. Consider the example shown in figure 2. The graph shows what the "real" evaluation at some point in state space is, as a function of a component of the control action taken at that pointj this function is what the predictive network should approximate. 
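A minimal numerical sketch of forward modeling as in equation (1), using linear stand-ins for the controller and the forward model: the model is fitted to the observed evaluation and then differentiated to obtain the gradient with respect to the control action, which is chained into the controller weights. The toy evaluation function, shapes and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Wc = 0.1 * rng.standard_normal((1, 2))   # controller weights (action c = Wc @ s)
Wm = 0.1 * rng.standard_normal((1, 1))   # forward-model weights (e_hat = Wm @ c)

def true_evaluation(c):                   # stand-in for the unknown transformation e(c)
    return np.tanh(2.0 * c)

s, e_star, lr = rng.standard_normal(2), np.array([0.5]), 0.05
for _ in range(200):
    c = Wc @ s                            # controller output
    e_hat = Wm @ c                        # forward-model prediction of e(c)
    Wm += lr * np.outer(true_evaluation(c) - e_hat, c)   # fit the forward model
    dE_dc = Wm.T @ (e_hat - e_star)       # second factor of equation (1), via the model
    Wc -= lr * np.outer(dE_dc, s)         # chain rule: dE/dWc = (dE/dc) (dc/dWc)
```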
Suppose the function implemented by the predictive network initially looks like the curve which crosses the "real" evaluation function at point (a)j suppose also that the current action taken also corresponds to point (a). Here we see a one-dimensional example of the redundancy problem: though the prediction at this point is entirely accurate, the gradient is not. If we wish to minimise the predicted evaluation, we would change the action in the direction of point (b). Examples of point (a) will no longer be presented to the predictive network, so it could quite plausibly modify itself simply so as to look like the estimated evaluation curve "B" which is shown crossing point (b) (a minimal change necessary to continue being correct). Again, the gradient is wrong and minimising the prediction will change the action in the same direction as before, perhaps to point (c)j then to (d), and so on. Eventually, the prediction, though accurate, will have zero gradient, as in curve "D", and no modifications will occur. In practice, we have observed networks "getting stuck" in this fashion. Though the objective was to minimise the evaluation, the system stops "learning" at a point far from optimal. The problem may be solved, as Jordan and Jacobs did, by introducing noise in the controller''s output, thus breaking the redundancy. Unfortunately, this degrades .. [ vector control vector Fast Learning with Predictive Forward Models 567 predicted predicted evaluation CONTROLLER NETWORK INTERMEDIATE (WORLD) MODEL PREDICTIVE MODEL Figure 3: The proposed system architecture. Again, solid lines represent data paths while the dashed line represents backpropagation (or differentiation). signal quality and means that since we are predicting future evaluations, we wish to predict the effects of future noise - a notoriously difficult objective. The predictive network eventually outputs the evaluation''s expectation value, but this can take a 3 USING AN INTERMEDIATE MODEL 3.1 AN EXTRA WORLD MODEL Another way to solve the redundancy problem is through the use of what is here called an "intermediate model": a model of the world the controller is interacting with. That is, if 8(t) represents the state vector at time t, and c(t) the controller output at time t, it is a model of the function 1 where 8(t 1) 1(8(t), c(t)). This model is used as represented schematically in figure 3. It helps in modularising the learning task faced by the predictive model [Chrisley 90], but more interestingly, it need not be trained simultaneously with the controller since its output does not depend on future control policy. Hence, it can be trained separately, with examples drawn from its entire (state x action) input space, providing gradient signals without arbitrary components when differentiated. Once trained, we freeze the intermediate model''s weights and insert it into the system as in figure 3; we then proceed to train the controller and predictive model as before. The predictive model will no longer have redundant inputs when trained either, so it too will provide correct gradient signals. Since all arbitrary components have been eliminated, the speedup expected from using differentiable predictive models should now be obtainable.2 3.2 AN EXAMPLE TASK The intermediate model architecture was tested on the same example task as used by Jordan and Jacobs, that of learning to balance a pole which is attached through a hinge on its lower end to a movable cart. 
The control action is a real valued-force 2This same architecture was independently proposed in [Werbos 90], but without the explanation as to why the intermediate model is necessary instead of merely desirable. 568 Brody L.arninq trial Figure 4: The evolution of eight different learning networks, using the intermediate model. applied to the cart; the evaluation signal is a "0" while the pole has not fallen over, and the cart hasn''t reached the edge of the finite-sized tracks it is allowed to move on, a "I" when either of these events happens. A trial is then said to have failed, and terminates.3 We count the number of learning trials needed before a controller is able to keep the pole balanced for a significant amount of a time (measured in simulated seconds). Figure 4 shows the evolution of eight networks; most reach balancing solutions within 100 to 300 faiulres. (These successful networks came from a batch of eleven: the other three never reached solutions.) This is 50 to 100 times faster than without the intermediate model, where 5000 to 30000 trials were needed to achieve similar balancing times [Jordan Jacobs 90]. We must now take into account the overhead needed to train the intermediate model. This was done in 200 seconds of simulated time, while training the whole system typically required some 400 seconds-the overhead is small compared to the improvement achieved through the use of the intermediate model. However, off-line training of the intermediate model requires an additional agency to organise the selection and presentation of training examples. In the real world, we would either need some device which could initialise the system at any point in state space, or we would have to train through "flailing": applying random control actions, over many trials, so as to eventually cover all possible states and actions. As the dimensionality of the state representation rises for larger problems, intermediate model training will become more difficult. 3The differential equations which were used as a model of this system may be found in [Barto, Sutton Anderson 83]. The parameters of the simulations were identical to those used in [Jordan Jacobs 90]. Fast Learning with Predictive Forward Models 569 3.3 REMARKS We should note that the need for covering all state space is not merely due to the requirement of training an intermediate model: dynamic-programming based techniques such as the ones mentioned in this paper are guaranteed to lead us to an optimal control solution only if we explore the entire state space during learning. This is due to their generality, since no a priori structure of the state space is assumed. It might be possible to interleave the training of the intermediate model with the training of the controller and predictor networks, so as to achieve both concurrently. High-dimensional problems will still be problematic, but not just due to intermediate model training-the curse of dimensionality is not easily avoided! 4 CONCLUSIONS If we differentiate through a model trained with redundant inputs, we eliminate possible arbitrary components (which are due to the arbitrary mixing of the inputs that the model may use) only if we differentiate tangentially along the manifold defined by the relationship between the inputs. For the architecture presented in [Jordan Jacobs 90], this is problematic, since the axes of the control vector will typically not be tangential to the manifold. 
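The intermediate-model architecture of figure 3 can be sketched with linear stand-ins for the three modules. The world model is assumed to have been trained offline and then frozen; differentiating the predicted evaluation through it yields a controller gradient that does not suffer from the redundancy problem. All shapes and names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
state_dim, action_dim = 4, 1
Wc = 0.1 * rng.standard_normal((action_dim, state_dim))               # controller
Ww = 0.1 * rng.standard_normal((state_dim, state_dim + action_dim))   # world model (trained offline, frozen)
Wp = 0.1 * rng.standard_normal((1, state_dim))                        # predictive model of the evaluation

def controller_gradient(s, target_eval=0.0):
    """Backpropagate the predicted evaluation through the frozen world model
    into the controller, as in figure 3."""
    c = Wc @ s
    s_next = Ww @ np.concatenate([s, c])          # intermediate (world) model
    v = Wp @ s_next                               # predicted evaluation
    dv_ds_next = Wp.T @ (v - target_eval)         # gradient at the next state
    dv_dc = Ww[:, state_dim:].T @ dv_ds_next      # through the frozen world model
    return np.outer(dv_dc, s)                     # gradient w.r.t. the controller weights

grad = controller_gradient(rng.standard_normal(state_dim))
```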
Once we take this into account, it is clear why the architecture was not as efficient as expected; and we can introduce an "intermediate" world model to avoid the problems that it had. Using the intermediate model allows us to correctly obtain (through backpropaga tion, or differentiation) a real-valued vector evaluation on the controller''s output. On the example task presented here, this led to a 50 to 100-foid increase in learn ing speed, and suggests a much better scaling-up performance and applicability to real-world problems than simple reinforcement learning, where real-valued outputs are not permitted, and vector control outputs would train very slowly. Acknowledgements Many thanks are due to Richard Rohwer, who supervised the beginning of this project, and to M. I. Jordan and R. Jacobs, who answered questions enlighteningly; thanks are also due to Dr F. Bracho at lIMAS, UNAM, who provided the environ ment for the project''s conclusion. This work was supported by scholarships from CON ACYT in Mexico and from Caltech in the U.S. References [Ackley 88] D. H. Ackley, "Associative Learning via Inhibitory Search", in D. S. Touretzky, ed., Advances in Neural Information Processing Systems 1, Morgan Kaufmann 1989 [Barto, Sutton Anderson 83] A. G. Barto, R. S. Sutton, and C. W. Anderson, "Neuronlike Adaptive Elements that can Solve Difficult Control Problems", IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-13, No.5, [Barto, Sutton Watkins 89] A. G. Barto, R. S. Sutton, and C. J. C. H. Watkins, "Learning and Sequential Decision Making", University of Massachusetts at Amherst COINS Technical Report 89-95, September 1989 [Chrisley 90] R. L. Chrisley, "Cognitive Map Construction and Use: A Parallel Dis tributed Approach", in Touretzky, Elman, Sejnowski, and Hinton, eds., Con nectionist Models: Proceedings of the 1990 Summer School, Morgan Kaufmann [Jordan Jacobs 90] M. I. Jordan and R. A. Jacobs, "Learning to Control an Un stable System with Forward Modeling", in D. S. Touretzky, ed., Advances in Neural Information Processing Systems 2, Morgan Kaufmann 1990 [Jordan Rumelhart 90] M. I. Jordan and D. E. Rumelhart, "Supervised learning with a Distal Teacher" , preprint. [Nguyen Widrow 90] D. Nguyen and B. Widrow, ''''The Truck Backer-Upper: An Example of Self-Learning in Neural Networks", in Miller, Sutton and Werbos, eds., Neural Networks for Control, MIT Press 1990 [Sutton 88] R. S. Sutton, "Learning to Predict by the Methods of Temporal Differ ences", Machine Learning 3: 9-44, 1988 [Werbos 90] P. Werbos, "Architectures for Reinforcement Learning", in Miller, Sut ton and Werbos, eds., Neural Networks for Control, MIT Press 1990' - 'Introduction Kernel machines have recently gained a lot of attention due to the popularisation of the support vector machine (SVM) [13] with a focus on classification and the revival of Gaussian Processes (GP) for regression [15]. Subsequently, SVMs have been modified to handle regression [12] and GPs have been adapted to the problem of classification [8]. Both schemes essentially work in the same function space that is characterised by kernels (SVM) and covariance functions (GP), respectively. While the formal similarity of the two methods is striking the underlying paradigms of inference are very different. The SVM was inspired by results from statisticalPAC learning theory while GPs are usually considered in a Bayesian framework. 
This ideological clash can be viewed as a continuation in machine learning of the by now classical disagreement between Bayesian and frequentistic statistics. With regard to algorithmics the two schools of thought appear to favour two different methods of learning and predicting: the SVM community - as a consequence of the formulation of the SVM as a quadratic programming problem - focuses on learning as optimisation while the Bayesian community favours sampling schemes based on the Bayesian posterior. Of course there exists a strong relationship between the two ideas, in particular with the Bayesian maximum a posteriori (MAP) estimator being the solution of an optimisation problem. Interestingly, the two viewpoints have recently been reconciled theoretically in the so-called PAC-Bayesian framework [5] that combines the idea of a Bayesian prior with PAC-style performance guarantees and has been the basis of the so far tightest margin bound for SVMs [3]. In practice, optimisation based algorithms have the advantage of a unique, deterministic solution and the availability of the cost function as an indicator for the quality of the solution. In contrast, Bayesian algorithms based on sampling and voting are more flexible and have the so-called "anytime" property, providing a relatively good solution at any point in time. Often, however, they suffer from the computational costs of sampling the Bayesian posterior. In this contribution we review the idea of the Bayes point machine (BPM) as an approximation to Bayesian inference for linear classifiers in kernel space in Section 2. In contrast to the GP viewpoint we do not define a Gaussian prior on the length Ilwllx: of the weight vector. Instead, we only consider weight vectors of length Ilwllx: 1 because it is only the spatial direction of the weight vector that matters for classification. It is then natural to define a uniform prior on the resulting ball shaped hypothesis space. Hence, we determine the centre of mass ("Bayes point") of the resulting posterior that is uniform in version space, i.e. in the zero training error region. While the version space could be sampled using some form of Gibbs sampling (see, e.g. [6] for an overview) or an ergodic dynamic system such as a billiard [4] we suggest to use the perceptron algorithm trained on permutations of the training set for sampling in Section 3. This extremely simple sampling scheme proves to be efficient enough to make the BPM applicable to large data sets. We demonstrate this fact in Section 4 on the well-known MNIST data set containing 60 000 samples of handwritten digits and show how an approximation to the posterior probability of classification provided by the BPM can even be used for test-point rejection leading to a great reduction in generalisation error on the remaining samples. We denote n-tuples by italic bold letters (e.g. x (Xl, ... ,xn )), vectors by roman bold letters (e.g. x), random variables by sans serif font (e.g. X) and vector spaces by calligraphic capitalised letters (e.g. X). The symbols P, E and I denote a prob ability measure, the expectation of a random variable and the indicator function, respectively. 2 Bayes Point Machines Let us consider the task of classifying patterns X E X into one of the two classes y E Y {-1, 1} using functions h : X Y from a given set 1t known as the hypothesis space. In this paper we shall only be concerned with linear classifiers: where : X K i is known I as the feature map and has to fixed beforehand. 
If all that is needed for learning and classification are the inner products (., .)x: in the feature space K, it is convenient to specify only by its inner product function 1 For notational convenience we shall abbreviate cf (x) by x. This should not be confused with the set x of training points. k : X X X -t IR known as the kernel, i.e. For simplicity, let us assume that there exists a classifier2 w E W that labels all This assumption can easily be relaxed by introducing slack variables as done in the soft margin variant of the SVM. Then given a training set z (x, y) of m points Xi together with their classes Yi assigned by hw'' drawn iid from an unknown data distribution P z PYIXP X we can assume the existence of a version space V (z), i.e. the set of all classifiers w E W consistent with z: In a Bayesian spirit we incorporate all of our prior knowledge about w into a prior distribution Pw over W. In the absence of any a priori knowledge we suggest a uniform prior over the spatial direction of weight vectors w. Now, given the training set z we update our prior belief by Bayes'' formula, i.e. ifwEV(Z) { otherwise where the first line follows from the independence and the fact that x has no depen dence on w and the second line follows from (2) and (3). The Bayesian classification of a novel test point x is then given by Bay esz (x) argmaxyEy Pw1zm z ({hw (x) y}) sign (EWlzmz [hw (x)]) Unfortunately, the strategy Bayes z is in general not contained in the set 1-l of classifiers considered beforehand. Since Pw1zmz is only non-zero inside version space, it has been suggested to use the centre of mass w crn as an approximation for Bayes z , i.e. This classifier is called the Bayes point. In a previous work [4] we calculated Wcrn using a first order Markov chain based on a billiard-like algorithm (see also [10]). We entered the version space V (z) using a perceptron algorithm and started play ing billiards in version space V (z) thus creating a sequence of pseudo-random samples Wi due to the chaotic nature of the billiard dynamics. Playing billiards in V (z) is possible because each training point (Xi, Yi) E z defines a hyperplane {w E W I Yi (Xi, w}JC O} W. Hence, the version space is a convex polyhedron on the surface of W. After N bounces of the billiard ball the Bayes point was estimated by 2We synonymously call h E 11. and w E W a classifier because there is a one-to-one correspondence between the two by virtue of (1). Although this algorithm shows excellent generalisation performance when compared to state-of-the art learning algorithms like support vector machines (SVM) [13], its effort scales like 0 (m2 ) and 0 (N . m 2 ) in terms of memory and computational requirements, respectively. 3 Sampling the Version Space Clearly, all we need for estimating the Bayes point (4) is a set of classifiers W drawn uniformly from V (z). In order to save computational resources it might be advan tageous to achieve a uniform sample only approximately. The classical perceptron learning algorithm offers the possibility to obtain up to m! different classifiers in ver sion space simply by learning on different permutations of the training set. Given a permutation II : {I, ... , m} - {I, ... , m} the perceptron algorithm works as follows: 1. Start with Wo 0 and t O. A classical theorem due to Novikoff [7] guarantees the convergence of this procedure and furthermore provides an upper bound on the number t of mistakes needed until convergence. 
More precisely, if there exists a classifier WSVM with margin then the number of mistakes until convergence - which is an upper bound on the sparsity of the solution - is not more than R2 (x) y;2 (WSVM), where R (x) is the smallest real number such that V x Ex: II (x) II K. :::; R (x). The quantity ''Y (WSVM) is maximised for the solution WSVM found by the SVM, and whenever the SVM is theoretically justified by results from learning theory (see [11, 13]) the ratio d R2 (x) ''Y;2 (WSVM) is considerably less than m, say d« m. Algorithmically, we can benefit from this sparsity by the following "trick": since all we need to store is the m-dimensional vector o. Furthermore, we keep track of the m-dimensional vector 0 of real valued outputs of the current solution at the i-th training point. By definition, in the beginning 0 00. Now, if 0i :::; 0 we update Qi by Qi Yi and update 0 by OJ OJ Yik (Xi, Xj) which requires only m kernel calculations. In summary, the memory requirement of this algorithm is 2m and the number of kernel calculations is not more than dm. As a consequence, the computational requirement of this algorithm is no more than the computational requirement for the evaluation ofthe margin ''Y (WSVM)! We suggest to use this efficient perceptron learning algorithm in order to obtain samples Wi for the computation of the Bayes point by (4). (a) (b) (c) Figure 1: (a) Histogram of generalisation errors (estimated on a test set) using a kernel Gibbs sampler. (b) Histogram of generalisation errors (estimated on a test set) using a kernel perceptron. (c) QQ plot of distributions (a) and (b). The straight line indicates that both distribution are very similar. In order to investigate the usefulness of this approach experimentally, we compared the distribution of generalisation errors of samples obtained by perceptron learning on permuted training sets (as suggested earlier by [14]) with samples obtained by a full Gibbs sampling [2]. For computational reasons, we used only 188 training patterns and 453 test patterns of the classes "I" and "2" from the MNIST data set3 . In Figure 1 (a) and (b) we plotted the distribution over 1000 random samples using Using a quantile-quantile (QQ) plot technique we can compare both distributions in one graph (see Figure 1 (c)). These plots suggest that by simple permutation of the training set we are able to obtain a sample of classifiers exhibiting the same generalisation error distribution as with time-consuming Gibbs sampling. 4 Experimental Results In our large scale experiment we used the full MNIST data set with 60000 training examples and 10000 test examples of 28 x 28 grey value images of handwritten digits. As input vector x we used the 784 dimensional vector of grey values. The images were labelled by one of the ten classes "0" to "I". For each of the ten classes y {O, ... , 9} we ran the perceptron algorithm N 10 times each time labelling all training points of class y by 1 and the remaining training points by -1. On an Ultra Sparc 10 each learning trial took approximately 20 - 30 minutes. For the classification of a test image x we calculated the real-valued output of all 100 different classifiers5 by where we used the kernel k given by (5). (Oi)j refers to the expansion coefficient corresponding to the i-th classifier and the j-th data point. Now, for each of the 3 available at http:wvw .research. att. comryannocrmnist. 4We decided to use this kernel because it showed excellent generalisation performance when using the support vector machine. 
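A sketch of the sampling scheme: a kernel perceptron that stores the expansion coefficients alpha and the cached outputs O (so each mistake costs only m kernel look-ups, as described in the text), run on random permutations of the training set, with the resulting coefficient vectors averaged as an estimate of the Bayes point. Averaging the alpha vectors directly, rather than unit-length weight vectors, is a simplification made here.

```python
import numpy as np

def kernel_perceptron(K, y, order, max_epochs=100):
    """Kernel perceptron on one permutation of the training set.
    K[i, j] = k(x_i, x_j); alpha and the cached outputs O are kept so that
    each mistake costs only m kernel look-ups."""
    m = len(y)
    alpha, O = np.zeros(m), np.zeros(m)
    for _ in range(max_epochs):
        mistakes = 0
        for i in order:
            if y[i] * O[i] <= 0:                  # mistake at training point i
                alpha[i] += y[i]
                O += y[i] * K[i]                  # update all cached outputs
                mistakes += 1
        if mistakes == 0:
            return alpha                          # consistent with the training set
    return alpha

def bayes_point_alpha(K, y, n_samples=10, seed=0):
    """Average the solutions from random permutations -- the estimate of the
    centre of mass of version space used for the experiments."""
    rng = np.random.default_rng(seed)
    sols = [kernel_perceptron(K, y, rng.permutation(len(y))) for _ in range(n_samples)]
    return np.mean(sols, axis=0)
```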
5For notational simplicity we assume that the first N classifiers are classifiers for the class "0", the next N for class "1" and so on. rejection rate generalisation error rejection rate Figure 2: Generalisation error as a function of the rejection rate for the MNIST data set. The SVM achieved 1.4 without rejection as compared to 1.46 for the BPM. Note that by rejection based on the real-valued output the generalisation error could be reduced to 0.1 indicating that this measure is related to the probability of misclassification of single test points. ten classes we calculated the real-valued decision of the Bayes point Wy by In a Bayesian spirit, the final decision was carried out by Note that ibp,y (x) [9] can be interpreted as an (unnormalised) approximation of the posterior probability that x is of class y when restricted to the function class (1). In order to test the dependence of the generalisation error on the magnitude max y ibp,y (x) we fixed a certain rejection rate r E [0,1] and rejected the set of r 10000 test points with the smallest value of max y ibp,y (x). The resulting plot is depicted in Figure 2. As can be seen from this plot, even without rejection the Bayes point has excellent generalisation performance6 . Furthermore, rejection based on the real-valued out put ibp (x) turns out to be excellent thus reducing the generalisation error to 0.1. One should also bear in mind that the learning time for this simple algorithm was comparable to that of SVMs. A very advantageous feature of our approach as compared to SVMs are its adjustable time and memory requirements and the "anytime" availability of a solution due to sampling. If the training set grows further and we are not able to spend more time with learning, we can adjust the number N of samples used at the price of slightly worse generalisation error. 5 Conclusion In this paper we have presented an algorithm for approximating the Bayes point by rerunning the classical perceptron algorithm with a permuted training set. Here we 6Note that the best know result on this data set if 1.1 achieved with a polynomial kernel of degree four. Nonetheless, for reason of fairness we compared the results of both algorithms using the same kernel. particularly exploited the sparseness of the solution which must exist whenever the success of the SVM is theoretically justified. The restriction to the zero training error case can be overcome by modifying the kernel as This technique is well known and was already suggested by Vapnik in 1995 (see [1]). Another interesting question raised by our experimental findings is the following: By how much is the distribution of generalisation errors over random samples from version space related to the distribution of generalisation errors of the up to m! different classifiers found by the classical perceptron algorithm? Acknowledgements We would like to thank Bob Williamson for helpful dis cussions and suggestions on earlier drafts. Parts of this work were done during a research stay of both authors at the ANU Canberra. References [1) C. Cortes and V. Vapnik. Support Vector Networks. Machine Learning, 20:273-297, [2) T. Graepel and R. Herbrich. The kernel Gibbs sampler. In Advances in Neural Information System Processing 13, 200l. [3) R. Herbrich and T . Graepel. A PAC-Bayesian margin bound for linear classifiers: Why SVMs work. In Advances in Neural Information System Processing 13, 200l. [4) R. Herbrich, T . Graepel, and C. Campbell. Robust Bayes Point Machines. In Pro [5) D. A. McAliester. 
Some PAC Bayesian theorems. In Proceedings of the Eleventh An nual Conference on Computational Learning Theory, pages 230-234, Madison, Wis [6) R. M. Neal. Markov chain monte carlo method based on ''slicing'' the density function. Technical report, Department of Statistics, University of Toronto, 1997. TR -9722. [7) A . Novikoff. On convergence proofs for perceptrons. In Report at the Symposium on Mathematical Theory of Automata , pages 24-26, Politechnical Institute Brooklyn, [8) M. Opper and O. Winther . Gaussian processes for classification: Mean field algo rithms. Neural Computation, 12(11), 2000. [9) J. Platt. Probabilities for SV machines. In Advances in Large Margin Classifiers, [10) P. Rujan and M . Marchand . Computing the bayes kernel classifier. In Advances in Large Margin Classifiers, pages 329-348. MIT Press, 2000. [11) J. Shawe-Taylor, P. L . Bartlett, R. C. Williamson, and M . Anthony . Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information [12) A. J. Smola. Learning with Kernels. PhD thesis, Technische Universitat Berlin, 1998. [13) V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995. [14) T. Watkin. Optimal learning with a neural network. Europhysics Letters, 21:871-877, [15) C. Williams. Prediction with Gaussian Processes: From linear regression to linear prediction and beyond. Technical report, Neural Computing Research Group , Aston' - source_sentence: Mathematical analysis of coarse-coded symbol memories in neural networks sentences: - 'Introduction Measuring ways by which several neurons in the brain participate in a specific computational task can shed light on fundamental neural information processing mechanisms . While it is unlikely that complete information from any macroscopic neural tissue will ever be available, some interesting insight can be obtained from simultaneously recorded cells in the cortex of behaving animals. The question we address in this study is the level of synergy, or the level of cooperation, among brain cells, as determined by the information they provide about the observed behavior of the animal. 1.1 The experimental data We analyze simultaneously recorded units from behaving monkeys during a delayed response behavioral experiment. The data was collected at the high brain function laboratory of the Haddassah Medical School of the Hebrew universitY[l, 2]. In this task the monkey had to remember the location of a visual stimulus and respond by touching that location after a delay of 1-32 sec. Correct responses were rewarded by a drop of juice. In one set of recordings six micro-electrodes were inserted simultaneously to the frontal or prefrontal cortex[l, 3]. In another set of experiments the same behavioral paradigm was used and recording were taken from the striatum - which is the first station in basal ganglia (a sub-cortical ganglia)[2]. The cells recorded in the striatum were the tonically active neurons[2], which are known to be the cholinergic inter-neurons of the striatum. These cells are known to respond to reward. The monkeys were trained to perform the task in two alternating modes , "Go" and "N o-Go" [1]. Both sets of behavioral modes can be detected from the recorded spike trains using several statistical modeling techniques that include Hidden Markov Models (HMM) and Post Stimulus Histograms (PSTH). The details of these detec tion methods are reported elsewhere[4, 5]. 
For this paper it is important to know that we can significantly detect the correct behavior, for example in the "Go" vs. the "No-Go" correct detection is achieved about 90 of the time, where the random is 50 and the monkey''s average performance is 95 correct on this task. 2 Theoretical background Our measure of synergy level among cells is information theoretic and was recently proposed by Brenner et. aZ. [6] for analysis of spikes generated by a single neuron. This is the first application of this measure to quantify cooperativity among neurons. 2.1 Synergy and redundancy A fundamental quantity in information theory is the mutual information between two random variables X and Y. It is defined as the cross-entropy (Kullbak-Liebler divergence) between the joint distribution of the variables, p(x, y), and the product of the marginal distributions p(x)p(y). As such it measures the statistical depen dence of the variables X and Y. It is symmetric in X and Y and has the following Synergy and Redundancy among Brain Cells of Behaving Monkeys 113 familiar relations to their entropies[7]: When given three random variables X I, X 2 and Y, one can consider the mutual information between the joint variables (XI,X2 ) and the variable Y, I(XI'' X 2; Y) (notice the position of the semicolon), as well as the mutual infor mations I(XI; Y) and I(X2; Y). Similarly, one can consider the mutual informa tion between Xl and X 2 conditioned on a given value of Y y, I(XI; X21y) DKL[P(X I,X2Iy)IP(Xl ly)P(X2Iy)]'' as well as its average, the conditional mutual information, Following Brenner et. al.[6] we define the synergy level of Xl and X2 with respect to the variable Y as with the natural generalization to more than two variables X . This expression can be rewritten in terms of entropies and conditional information as follows: Depends On Y Independent of Y When the variables exhibit positive synergy value, with respect to the variable Y, they jointly provide more information on Y than when considered independently, as expected in synergetic cases. Negative synergy values correspond to redundancy - the variables do not provide independent information about Y. Zero synergy value is obtained when the variables are independent of Y or when there is no change in their dependence when conditioned on Y. We claim that this is a useful measure of cooperativity among neurons, in a given computational task. It is clear from Eq.( 3) that if since in that case L yP(y)Iy(XI;X2) I(XI;X2). In other words, the synergy value is not zero only if the statistical dependence, hence the mutual information between the variables, is affected by the value of Y . It is positive when the mutual information increase, on the average, when conditioned on Y, and negative if this conditional mutual information decrease. Notice that the value of synergy can be both positive and negative since information, unlike entropy, is not sub-additive in the X variables. 114 1. Gat and N Tishby 3 Synergy among neurons Our measure of synergy among the units is based on the ability to detect the behavioral mode from the recorded activity, as we discuss bellow. As discussed above, synergy among neurons is possible only if their statistical dependence change with time. An important case where synergy is not expected is pure "population coding" [8]. In this case the cells are expected to fire independently, each with its own fixed tuning curve. 
Our synergy value can thus be used to test if the recorded units are indeed participating in a pure population code of this kind, as hypothesized for certain motor cortical activity. Theoretical models of the cortex that clearly predict nonzero synergy include at tractor neural networks (ANN)[9] and synfire chain models(SFC)[3]. Both these models predict changes in the collective activity patterns, as neurons move between attractors in the ANN case, or when different synfire-chains of activity are born or disappear in the SFC case. To the extent that such changes in the collective activity depend on behavior, nonzero synergy values can be detected. It remains an interesting theoretical challenge to estimate the quantitative synergy values for such models and compare it to observed quantities. 3.1 Time-dependent cross correlations In our previous studies[4] we demonstrated, using hidden Markov models of the activity, that the pairwise cross-correlations in the same data can change signifi cantly with time, depending on the underlying collective state of activity. These states, revealed by the hidden Markov model, in turn depend on the behavior and enable its prediction. Dramatic and fast changes in the cross-correlation of cells has also been shown by others[lO]. This finding indicate directly that the statistical dependence of the neurons can change (rapidly) with time, in a way correlated to behavior. This clearly suggests that nonzero synergy should be observed among these cortical units, relative to this behavior. In the present study this theoretical hypothesis is verified. 3.2 Redundancy cases If on the other hand the conditioned mutual information equal zero for all behavioral modes, i.e. Iy(Xl; X2) 0 Vy E Y , while I(Xl; X 2) 0, we expect to get negative synergy, or redundancy among the cells, with respect to the behavior variable Y. We observed clear redundancy in another part of the brain, the basal ganglia, dur ing the same experiment, when the behavior was the pre-reward and post-reward activity. In this case different cells provide exactly the same information, which yields negative synergy values. 4 Experimental results 4.1 Synergy measurement in practice To evaluate the synergy value among different cells, it is necessary to estimate the conditional distribution p(ylx) where y is the current behavior and x represent a single trial of spike trains of the considered cells. Estimating this probability, Synergy and Redundancy among Brain Cells of Behaving Monkeys 115 however, requires an underlying statistical model, or a represented of the spike trains. Otherwise there is never enough data since cortical spike trains are never exactly reproducible. In this work we choose the rate representation, which is the simplest to evaluate. The estimation of p(ylx) goes as follows: For each of the M behavioral modes (Y1, Y2 .. , YM) collect spike train samples (the tmining data set). Using the training sample, construct a Post Stimulus Time Histogram (PSTH), i.e. the rate as function of time, for each behavioral mode. Given a spike train, outside of the training set, compute its probability to be result in each of the M modes. The spike train considered correctly classified if the most probable mode is in fact the true behavioral mode, and incorrectly otherwise. The fraction of correct classification, for all spike trains of a given behavioral mode Yi, is taken as the estimate of P(Yi Ix), and denoted pc., where Ci 1S the identity of the cells used in the computation. 
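The detection procedure just outlined can be made concrete. The sketch below is not the authors' code; it assumes binned spike counts and an independent-Poisson likelihood per time bin (the original work compares PSTH and hidden-Markov detectors whose exact likelihoods may differ), and the rate profiles and trial counts are made up purely for illustration.

```python
import numpy as np

def psth(trials, n_bins):
    """Mean spike count per time bin across training trials (trials: [n_trials, n_bins])."""
    return trials.mean(axis=0) + 1e-6          # small floor avoids log(0)

def log_likelihood(counts, rate):
    """Log-probability of one trial's binned counts under independent Poisson bins."""
    return float(np.sum(counts * np.log(rate) - rate))   # constant terms dropped

def classify(trial, rates_per_mode):
    """Index of the behavioural mode whose PSTH best explains the trial."""
    return int(np.argmax([log_likelihood(trial, r) for r in rates_per_mode]))

# Hypothetical toy data: 2 modes, 50 training trials each, 20 time bins.
rng = np.random.default_rng(0)
rate_go, rate_nogo = np.linspace(2, 8, 20), np.linspace(8, 2, 20)
train = [rng.poisson(rate_go, (50, 20)), rng.poisson(rate_nogo, (50, 20))]
rates = [psth(t, 20) for t in train]

test = rng.poisson(rate_go, (100, 20))                   # held-out "Go" trials
accuracy = np.mean([classify(tr, rates) == 0 for tr in test])
print(f"fraction of correct 'Go' detections: {accuracy:.2f}")  # estimate of p(y|x)
```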
For the case of only two categories of behavior and for a uniform distribution of the different categories, the value of the entropy H(Y) is the same for all combinations of cells, and is simply H (Y) - Ly p(y) log2 (p(y)) log22 1. The full expression (in bits) for the synergy value can be thus written as follows: If the first expression is larger than the second than there is (positive) synergy and vice versa for redundancy. However there is one very important caveat. As we saw the computation of the mutual information is not done exactly, and what one really computes is only a lower bound . If the bound is tighter for multiple cell calculation, the method could falsely infer positive synergy, and if the bound is tighter for the single cell computation, the method could falsely infer negative synergy. In previous works we have shown that the method we use for this estimation is quite reasonable and robust[5], therefore, we believe that we have even a conservative (i.e. less positive) estimate of synergy. 4.2 Observed synergy values In the first set of experiments we tried to detect the behavioral mode during the delay-period of correct trials. In this case the two types of behavior were the "Go" and the "No-Go" described in the introduction. An example of this detection problem is given in figure lAo In this figure there are 100 examples of multi-electrode recording of spike trains during the delay period. On the left is the "Go-mode" data and on the right the "No-Go mode", for two cells. On the lower part there is an example of two single spike trains that need to be classified by the mode models. 116 I. Gat and N. Tishby Figure 1: Raster displays of simultaneously recorded cells in the 2 different areas, in each area there were 2 behavioral modes. Table 1 gives some examples of detection results obtained by using 2 cells indepen dently, and by using their joint combination. It can be seen that the synergy is positive and significant. We examined 19 recording session of the same behavioral modes for two different animals and evaluated the synergy value. In 18 out of the 19 sessions there was at least one example of significant positive synergy among the cells. For comparison we analyzed another set of experiments in which the data was recorded from the striatum in the basal ganglia. An example for this detection is shown in figure lB. The behavioral modes were the "pre-reward" vs. the "post reward" periods. Nine recording sessions for the two different monkeys were exam ined using the same detection technique. Although the detection results improve when the number of cells increase, in none of these recordings a positive synergy value was found. For most of the data the synergy value was close to zero, i.e. the mutual information among two cells jointly was close to the sum of the mutual infor mation of the independent cells, as expected when the cells exhibit (conditionally) independent activity. The prevailing difference between the synergy measurements in the cortex and in the TAN s'' of the basal ganglia is also strengthen by the different mechanisms underlying those cells. The TANs'' are assumed to be globally mediators of information in the striatum, a relatively simple task, whereas the information processed in the frontal cortex in this task is believed to be much more collective and complicated. Here we suggest a first handle for quantitative detection of such different neuronal activities. 
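For readers who want to reproduce the arithmetic, the following minimal sketch computes the synergy value defined above directly from a discrete joint distribution p(x1, x2, y). The toy distribution is hypothetical and chosen only to show a case where the pair of cells carries information about the mode that neither cell carries alone.

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits for a joint distribution given as a 2-D array summing to 1."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def synergy(pxxy):
    """Syn(X1, X2; Y) = I(X1,X2; Y) - I(X1; Y) - I(X2; Y) for a 3-D array [x1, x2, y]."""
    n1, n2, ny = pxxy.shape
    i_joint = mutual_information(pxxy.reshape(n1 * n2, ny))   # treat (X1, X2) as one variable
    i_x1 = mutual_information(pxxy.sum(axis=1))               # marginalise out X2
    i_x2 = mutual_information(pxxy.sum(axis=0))               # marginalise out X1
    return i_joint - i_x1 - i_x2

# Hypothetical toy example: two binary "cells" and a binary behavioural mode with an
# XOR-like dependence of the cells' correlation on the mode.
p = np.zeros((2, 2, 2))
p[:, :, 0] = np.array([[0.25, 0.0], [0.0, 0.25]])   # mode 0: cells agree
p[:, :, 1] = np.array([[0.0, 0.25], [0.25, 0.0]])   # mode 1: cells disagree
print(round(synergy(p), 3))  # 1.0 bit: the pair identifies the mode, each cell alone gives 0
```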
Acknowledgments Special thanks are due to Moshe Abeles for his encouragement and support, and to William Bialek for suggesting the idea to look for the synergy among cortical cells. We would also like to thank A. Raz, Hagai Bergman, and Eilon Vaadia for sharing their data with us. The research at the Hebrew university was supported in part by a grant from the Unites States Israeli Binational Science Foundation (BSF). Synergy and Redundancy among Brain Cells of Behaving Monkeys 117 Table 1: Examples of synergy among cortical neurons. For each example the mutual information of each cell separately is given together with the mutual information of the pair. In parenthesis the matching detection probability (average over p(ylx)) is also given. The last column gives the percentage of increase from the mutual information of the single cells to the mutual information of the pair. The table gives only those pairs for which the percentage was larger than 20 and the detection rate higher than 60. Session Cells CellI Ce1l2 Both cells Syn () References [1] M. Abeles, E. Vaadia, H. Bergman, Firing patterns of single unit in the pre frontal cortex and neural-networks models., Network 1 (1990). [2] E. Raz , et al Neuronal synchronization of tonically active neurons in the striatum of normal and parkinsonian primates, J. Neurophysiol. 76:2083-2088 [3] M. Abeles, Corticonics, (Cambridge University Press, 1991). [4] I. Gat , N. Tishby and M. Abeles, Hidden Markov modeling of simultaneously recorded cells in the associative cortex of behaving monkeys, Network,8:297-322 [5] I. Gat, N. Tishby, Comparative study of different supervised detection methods of simultaneously recorded spike trains, in preparation. [6] N. Brenner, S.P. Strong, R. Koberle, W. Bialek, and R. de Ruyter van Steveninck, The Economy of Impulses and the Stiffnes of Spike Trains, NEC Research Institute Technical Note (1998). [7] T.M . Cover and J.A. Thomas, Elements of Information Theory., (Wiley NY, [8] A.P. Georgopoulos, A.B. Schwartz, R.E. Kettner, Neuronal Population Coding [9] D.J. Amit, Modeling Brain Function, (Cambridge University Press, 1989). [10] E. Ahissar et al Dependence of Cortical Plasticity on Correlated Activity of Single Neurons and on Behavioral Context, Science, 257:1412-1415 (1992).' - 'Introduction A di8tributed repre8entation is a memory scheme in which each entity (concept, symbol) is represented by a pattern of activity over many units [3]. If each unit participates in the representation of many entities, it is said to be coar8ely tuned, and the memory itself is called a coar8e-coded memory. Coarse-coded memories have been used for storing symbols in several neural network symbol processing models, such as Touretzky and Hinton''s distributed connectionist production system DCPS [8,9], Touretzky''s distributed implementation of linked list structures on a Boltzmann machine, BoltzCONS [10], and St. John and McClelland''s PDP model of case role defaults [6]. In all of these models, memory capacity was mea sured empirically and parameters were adjusted by trial and error to obtain the desired behavior. We are now able to give a mathematical foundation to these experiments by analyzing the relationships among the fundamental memory parameters. There are several paradigms for coarse-coded memories. In a feature-based repre- 8entation, each unit stands for some semantic feature. 
Binary units can code features with binary values, whereas more complicated units or groups of units are required to code more complicated features, such as multi-valued properties or numerical values from a continuous scale. The units that form the representation of a concept define an intersection of features that constitutes that concept. Similarity between concepts composed of binary Ceatures can be measured by the Hamming distance between their representations. In a neural network implementation, relationships between concepts are implemented via connections among the units forming their representations. Certain types of generalization phenomena thereby emerge automatically. A different paradigm is used when representing points in a multidimensional contin uous space [2,3]. Each unit encodes values in some subset of the space. Typically the American Institute of Physics 1988 653 subsets are hypercubes or hyperspheres, but they may be more coarsely tuned along some dimensions than others [1]. The point to be represented is in the subspace formed by the intersection of all active units. AB more units are turned on, the accuracy of the representation improves. The density and degree of overlap of the units'' receptive fields determines the system''s resolution [7]. Yet another paradigm for coarse-coded memories, and the one we will deal with exclusively, does not involve features. Each concept, or symbol, is represented by an arbitrary subset of the units, called its pattern. Unlike in feature-based representations, the units in the pattern bear no relationship to the meaning of the symbol represented. A symbol is stored in memory by turning on all the units in its pattern. A symbol is deemed present if all the units in its pattern are active.l The receptive field of each unit is defined as the set of all symbols in whose pattern it participates. We call such memories coarse coded symbol memories (CCSMs). We use the term "symbol" instead of "concept" to emphasize that the internal structure of the entity to be represented is not involved in its representation. In CCSMs, a short Hamming distance between two symbols does not imply semantic similarity, and is in general an undesirable phenomenon. The efficiency with which CCSMs handle sparse memories is the major reason they have been used in many connectionist systems, and hence the major reason for studying them here. The unit-sharing strategy that gives rise to efficient encoding in CCSMs is also the source of their major weakness. Symbols share units with other symbols. AB more symbols are stored, more and more of the units are turned on. At some point, some symbol may be deemed present in memory because all of its units are turned on, even though it was not explicitly stored: a "ghost" is born. Ghosts are an unwanted phenomenon arising out of the overlap among the representations of the various symbols. The emergence of ghosts marks the limits of the system''s capacity: the number of symbols it can store simultaneously and reliably. 2 Definitions and Fundamental Parameters A coarse coded symbol memory in its most general form consists of: A set of N binary state units. An alphabet of Q symbols to be represented. Symbols in this context are atomic entities: they have no constituent structure. A memory scheme, which is a function that maps each symbol to a subset of the units - its pattern. The receptive field of a unit is defined as the set of all symbols to whose pattern it belongs (see Figure 1). 
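A coarse coded symbol memory of this kind is easy to simulate. The sketch below is a minimal illustration, assuming random fixed-size patterns (one of the schemes analysed later); the class name, parameter values, and the storage loop are hypothetical and serve only to show how a ghost is detected.

```python
import random

class CCSM:
    """Coarse-coded symbol memory: each symbol maps to a fixed random subset of N binary units."""

    def __init__(self, n_units, pattern_size, symbols, seed=0):
        rng = random.Random(seed)
        self.patterns = {s: frozenset(rng.sample(range(n_units), pattern_size))
                         for s in symbols}
        self.active = set()     # units currently switched on
        self.stored = set()

    def store(self, symbol):
        self.active |= self.patterns[symbol]
        self.stored.add(symbol)

    def present(self, symbol):
        return self.patterns[symbol] <= self.active    # all of the symbol's units are on

    def ghosts(self):
        return [s for s in self.patterns if self.present(s) and s not in self.stored]

# Hypothetical parameters: 200 units, patterns of 12 units, an alphabet of 1000 symbols.
mem = CCSM(n_units=200, pattern_size=12, symbols=range(1000))
for s in range(1000):
    mem.store(s)
    if mem.ghosts():
        print(f"first ghost after storing {len(mem.stored)} symbols "
              f"({len(mem.active)} of 200 units on)")
        break
```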
The exact nature of the lThis criterion can be generalized by introducing a visibility threshold: a fraction of the pattern that should be on in order for a symbol to be considered present. Our analy sis deals only with a visibility criterion of 100, but can be generalized to accommodate Figure 1: A memory scheme (N 6, Q 8) defined in terms of units Us and symbols 8;. The columns are the symbols'' patterns. The rows are the units'' receptive fieldB. memory scheme mapping determines the properties of the memory, and is the central target of our investigation. As symbols are stored, the memory fills up and ghosts eventually appear. It is not possible to detect a ghost simply by inspecting the contents of memory, since there is no general way of distinguishing a symbol that was stored from one that emerged out of overlaps with other symbols. (It is sometimes possible, however, to conclude that there are no ghosts.) Furthermore, a symbol that emerged as a ghost at one time may not be a ghost at a later time if it was subsequently stored into memory. Thus the definition of a ghost depends not only on the state of the memory but also on its history. Some memory schemes guarantee that no ghost will emerge as long as the number of symbols stored does not exceed some specified limit. In other schemes, the emergence of ghosts is an ever-present possibility, but its probability can be kept arbitrarily low by adjusting other parameters. We analyze systems of both types. First, two more bits of notation need to be introduced: Pghost: Probability of a ghost. The probability that at least one ghost will appear after some number of symbols have been stored. k: Capacity. The maximum number of symbols that can be stored simultaneously before the probability of a ghost exceeds a specified threshold. If the threshold is 0, we say that the capacity is guaranteed. A localist representation, where every symbol is represented by a single unit and every unit is dedicated to the representation of a single symbol, can now be viewed as a special case of coarse-coded memory, where k N Q and Pghost o. Localist representations are well suited for memories that are not sparse. In these cases, coarse coded memories are at a disadvantage. In designing coarse-coded symbol memories we are interested in cases where k « N « Q. The permissible probability for a ghost in these systems should be low enough so that its impact can be ignored. 655 3 Analysis of Four Memory Schemes 3.1 Bounded Overlap (guaranteed capacity) If we want to construct the memory scheme with the largest possible a (given Nand k) while guaranteeing Pghost 0, the problem can be stated formally as: Given a set of size N, find the largest collection of subsets of it such that no union of k such subsets subsumes any other subset in the collection. This is a well known problem in Coding Theory, in slight disguise. Unfortunately, no complete analytical solution is known. We therefore simplify our task and consider only systems in which all symbols are represented by the same number of units (i.e. all patterns are of the same size). In mathematical terms, we restrict ourselves to constant weight codes. The problem then becomes: Given a set of size N, find the largest collection of subsets of size exactly L such that no union of k such subsets subsumes any other subset in the collection. There are no known complete analytical solutions for the size of the largest collection of patterns even when the patterns are of a fixed size. 
Nor is any efficient procedure for constructing such a collection known. We therefore simplify the problem further. We now restrict our consideration to patterns whose pairwise overlap is bounded by a given number. For a given pattern size L and desired capacity k, we require that no two patterns overlap in more than m units, where: Memory schemes that obey this constraint are guaranteed a capacity of at least k symbols, since any k symbols taken together can overlap at most L - 1 units in the pattern of any other symbol - one unit short of making it a ghost. Based on this constraint, our mathematical problem now becomes: Given a set of size N, find the largest collection of subsets of size exactly L such that the intersection of any two such subsets is of size m (where m is given by equation 1.) Coding theory has yet to produce a complete solution to this problem, but several methods of deriving upper bounds have been proposed (see for example [4]). The simple formula we use here is a variant of the Johnson Bound. Let abo denote the maximum number of symbols attainable in memory schemes that use bounded overlap. Then The Johnson bound is known to be an exact solution asymptotically (that is, when N, L, m - 00 and their ratios remain finite). Since we are free to choose the pattern size, we optimize our memory scheme by maximizing the above expression over all possible values of L. For the parameter sub space we are interested in here (N 1000, k 50) we use numerical approximation to obtain: (Recall that m is a function of Land k.) Thus the upper bound we derived depicts a simple exponential relationship between Q and Nk. Next, we try to construct memory schemes of this type. A Common Lisp program using a modified depth-first search constructed memory schemes for various parameter values, whose Q''S came within 80 to 90 of the upper bound. These results are far from conclusive, however, since only a small portion of the parameter space was tested. In evaluating the viability of this approach, its apparent optimality should be con trasted with two major weaknesses. First, this type of memory scheme is hard to construct computationally. It took our program several minutes of CPU time on a Symbolics 3600 to produce reasonable solutions for cases like N 200, k 5, m 1, with an exponential increase in computing time for larger values of m. Second, if CC SMs are used as models of memory in naturally evolving systems (such as the brain), this approach places too great a burden on developmental mechanisms. The importance of the bounded overlap approach lies mainly in its role as an upper bound for all possible memory schemes, subject to the simplifications made earlier. All schemes with guaranteed capacities can be measured relative to equation 3. 3.2 Random Fixed Size Patterns (a stochastic approach) Randomly produced memory schemes are easy to implement and are attractive because of their naturalness. However, if the patterns of two symbols coincide, the guaranteed capacity will be zero (storing one of these symbols will render the other a ghost). We therefore abandon the goal of guaranteeing a certain capacity, and instead establish a tolerance level for ghosts, Pghost. For large enough memories, where stochastic behavior is more robust, we may expect reasonable capacity even with very small Pghost. In the first stochastic approach we analyze, patterns are randomly selected subsets of a fixed size L. Unlike in the previous approach, choosing k does not bound Q. 
We may define as many symbols as we wish, although at the cost of increased probability of a ghost (or, alternatively, decreased capacity). The probability of a ghost appearing after k symbols have been stored is given by Equation 4: TN,L(k, e) is the probability that exactly e units will be active after k symbols have been stored. It is defined recursively by Equation 5": We have constructed various coarse-coded memories with random fixed-size receptive fields and measured their capacities. The experimental results show good agreement with the above equation. The optimal pattern size for fixed values of N, k, and a can be determined by binary search on Equation 4, since Pghost(L) has exactly one maximum in the interval [1, N]. However, this may be expensive for large N. A computational shortcut can be achieved by estimating the optimal L and searching in a small interval around it. A good initial estimate is derived by replacing the summation in Equation 4 with a single term involving E[e]: the expected value of the number of active units after k symbols have been stored. The latter can be expressed as: The estimated L is the one that maximizes the following expression: An alternative formula, developed by Joseph Tebelskis, produces very good approx imations to Eq. 4 and is much more efficient to compute. After storing k symbols in memory, the probability P z that a single arbitrary symbol x has become a ghost is given If we now assume that each symbol''s Pz is independent of that of any other symbol, we obtain: This assumption of independence is not strictly true, but the relative error was less than 0.1 for the parameter ranges we considered, when Pghost was no greater than We have constructed the two-dimensional table TN,L(k, c) for a wide range of (N, L) values (70 N 1000, 7 L 43), and produced graphs of the relationships between N, k, a, and Pghost for optimum pattern sizes, as determined by Equation 4. The 658 results show an approximately exponential relationship between a and N k [5]. Thus, for a fixed number of symbols, the capacity is proportional to the number of units. Let arlp denote the maximum number of symbols attainable in memory schemes that use random fixed-size patterns. Some typical relationships, derived from the data, are: 3.3 Random Receptors (a stochastic approach) A second stochastic approach is to have each unit assigned to each symbol with an independent fixed probability s. This method lends itself to easy mathematical analysis, resulting in a closed-form analytical solution. After storing k symbols, the probability that a given unit is active is 1 - (1 - s)k (independent of any other unit). For a given symbol to be a ghost, every unit must either be active or else not belong to that symbol''s pattern. That will happen with a probability [1 - s . (1 - s)k] N, and thus the probability of a ghost is: Assuming Pghost « 1 and k « a (both hold in our case), the expression can be simplified to: from which a can be extracted: We can now optimize by finding the value of s that maximizes a, given any desired upper bound on the expected value of Pghost. This is done straightforwardly by solving BaBs o. Note that 8 N corresponds to L in the previous approach. The solution is s l(k 1), which yields, after some algebraic manipulation: A comparison of the results using the two stochastic approaches reveals an interesting similarity. 
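The closed-form expressions above are straightforward to evaluate. The snippet below follows the random-receptor analysis, using the optimal s = 1/(k+1) and the approximation Pghost ≈ α · [1 − s(1−s)^k]^N; the numbers (N = 2000 units, a 25^3-symbol alphabet, Pghost = 0.01) mirror the DCPS check discussed later in the text and are meant only as an illustration.

```python
def alpha_rr(n_units, k, p_ghost):
    """Approximate alphabet size supportable by the random-receptor scheme at the
    optimal assignment probability s = 1/(k+1), so that the expected probability
    of any ghost after storing k symbols stays below p_ghost."""
    s = 1.0 / (k + 1)
    per_symbol = (1.0 - s * (1.0 - s) ** k) ** n_units   # P(one fixed symbol is a ghost)
    return p_ghost / per_symbol

# Illustrative check: with N = 2000 units and Pghost = 0.01, find the largest k
# that still supports an alphabet of 25**3 symbols.
N, alphabet = 2000, 25 ** 3
k = 1
while alpha_rr(N, k + 1, 0.01) >= alphabet:
    k += 1
print("largest k with alpha >= 25^3:", k)   # 51 for these numbers
```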
For large k, with Pghost 0.01 the term 0.468k of Equation 8 can be seen as a numerical approximation to the log term in Equation 11, and the multiplicative factor of 0.0086 in Equation 8 approximates Pghost in Equation 11. This is hardly surprising, since the Law of Large Numbers implies that in the limit (N, k - 00, with 8 fixed) the two methods are equivalent. 659 Finally, it should be. noted that the stochastic approaches we analyzed generate a family of memory schemes, with non-identical ghost-probabilities. Pghost in our formulas is therefore better understood as an expected value, averaged over the entire family. 3.4 Partitioned Binary Coding (a reference point) The last memory scheme we analyze is not strictly distributed. Rather, it is somewhere in between a distributed and a localist representation, and is presented for comparison with the previous results. For a given number of units N and desired capacity k, the units are partitioned into k equal-size "slots," each consisting of N k units (for simplicity we assume that k divides N). Each slot is capable of storing exactly one symbol. The most efficient representation for all possible symbols that may be stored into a slot is to assign them binary codes, using the N k units of each slot as bits. This would allow 2N Jic symbols to be represented. Using binary coding, however, will not give us the required capacity of 1 symbol, since binary patterns subsume one another. For example, storing the code ''10110'' into one of the slots will cause the codes ''10010'', ''10100'' and ''00010'' (as well as several other codes) to become ghosts. A possible solution is to use only half of the bits in each slot for a binary code, and set the other half to the binary complement of that code (we assume that Nk is even). This way, the codes are guaranteed not to subsume one another. Let Qpbc denote the number of symbols representable using a partitioned binary coding scheme. Then, Once again, Q is exponential in N k. The form of the result closely resembles the estimated upper bound on the Bounded Overlap method given in Equation 3. There is also a strong resemblance to Equations 8 and 11, except that the fractional multiplier in front of the exponential, corresponding to Pghost, is missing. Pghost is 0 for the Parti tioned Binary Coding method, but this is enforced by dividing the memory into disjoint sets of units rather than adjusting the patterns to reduce overlap among symbols. As mentioned previously, this memory scheme is not really distributed in the sense used in this paper, since there is no one pattern associated with a symbol. Instead, a symbol is represented by anyone of a set of k patterns, each N k bits long, corresponding to its appearance in one of the k slots. To check whether a symbol is present, all k slots must be examined. To store a new symbol in memory, one must scan the k slots until an empty one is found. Equation 12 should therefore be used only as a point of reference. 4 Measurement of DCPS The three distributed schemes we have studied all use unstructured patterns, the only constraint being that patterns are at least roughly the same size. Imposing more com plex structure on any of these schemes may is likely to reduce the capacity somewhat. In 660 Memory Scheme Result Bounded Overlap Qbo(N, k) eO.367t Random Fixed-size Patterns Q,,!p(Pghost 0.01) 0.0086. e.468 r Random Receptors Q P . eN .1og(k1)"''Tl((kl)"''Tlk"'') ,.,. 
- ghost Partitioned Binary Coding Qpbc - eO.347r - Table 1 Summary of results for various memory schemes. order to quantify this effect, we measured the memory capacity of DCPS (BoltzCONS uses the same memory scheme) and compared the results with the theoretical models analyzed above. DCPS'' memory scheme is a modified version of the Random Receptors method [5]. The symbol space is the set of all triples over a 25 letter alphabet. Units have fixed-size receptive fields organized as 6 x 6 x 6 subspaces. Patterns are manipulated to minimize the variance in pattern size across symbols. The parameters for DCPS are: N 2000, deviation of 1.5. When Pghost 0.01 the measured capacity was k 48 symbols. By substituting for N in Equation 11 we find that the highest k value for which Q,.,. 15625 is 51. There does not appear to be a significant cost for maintaining structure in the receptive fields. 5 Summary and Discussion Table 1 summarizes the results obtained for the four methods analyzed. Some dif ferences must be emphasiz''ed: Qbo and Qpbc deal with guaranteed capacity, whereas Q,.!p and Q,.,. are meaningful only for Pghost O. Qbo is only an upper bound. Q,.!p is based on numerical estimates. Qpbc is based on a scheme which is not strictly coarse-coded. The similar functional form of all the results, although not surprising, is aesthetically pleasing. Some of the functional dependencies among the various parameters can be derived informally using qualitative arguments. Only a rigorous analysis, however, can provide the definite answers that are needed for a better understanding of these systems and their scaling properties. 661 Acknowledgments We thank Geoffrey Hinton, Noga Alon and Victor Wei for helpful comments, and Joseph Tebelskis for sharing with us his formula for approximating Pghost in the case of fixed pattern sizes. This work was supported by National Science Foundation grants IST-8516330 and EET-8716324, and by the Office of Naval Research under contract number NOOO14-86- K-0678. The first author was supported by a National Science Foundation graduate fellowship. References [1] Ballard, D H. (1986) Cortical connections and parallel processing: structure and function. Behavioral and Brain Sciences 9(1). [2] Feldman, J. A., and Ballard, D. H. (1982) Connectionist models and their proper ties. Cognitive Science 6, pp. 205-254. [3] Hinton, G. E., McClelland, J. L., and Rumelhart, D. E. (1986) Distributed repre sentations. In D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1. Cambridge, MA: MIT Press. [4] Macwilliams, F.J., and Sloane, N.J.A. (1978). The Theory of Error-Correcting Codes, North-Holland. [5] Rosenfeld, R. and Touretzky, D. S. (1987) Four capacity models for coarse-coded symbol memories. Technical report CMU-CS-87-182, Carnegie Mellon University Computer Science Department, Pittsburgh, PA. [6] St. John, M. F. and McClelland, J. L. (1986) Reconstructive memory for sentences: a PDP approach. Proceedings of the Ohio University Inference Conference. [7] Sullins, J. (1985) Value cell encoding strategies. Technical report TR-165, Com puter Science Department, University of Rochester, Rochester, NY. [8] Touretzky, D. S., and Hinton, G. E. (1985) Symbols among the neurons: details of a connectionist inference architecture. Proceedings of IJCAI-85, Los Angeles, CA, [9] Touretzky, D. S., and Hinton, G. E. (1986) A distributed connectionist produc tion system. 
Technical report CMU-CS-86-172, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA. [10] Touretzky, D. S. (1986) BoltzCONS: reconciling connectionism with the recursive nature of stacks and trees. Proceedings of the Eighth A nnual Conference of the Cognitive Science Society, Amherst, MA, pp. 522-530.' - 'INTRODUCTION 1.1 THE M.AUTHNER SYSTEM Much is known about the brainstem system that controls fast-start escapes in teleost fish. The most prominent feature of this network is the pair of large Mauthner cells whose axons cross the midline and descend down the spinal cord to synapse on primary motoneurons. The Mauthner system also includes inhibitory neurons, the PHP cells, which have a unique and intense field effect inhibition at the spike initiating zone of the Mauthner cells (Faber and Korn, 1978). The Mauthner system is part of the full brainstem escape network which also includes two pairs of cells homologous to the Mauthner cell and other populations of reticulospinal neurons. With this network fish initiate escapes only from appropriate stimuli, turn away from the offending stimulus, and do so very rapidly with a latency around 15 msec in goldfish. The Mauthner cells play an important role in these functions. Only one 574 Directional Hearing by the Mauthner System 575 fires thus controlling the direction of the initial turn, and it fires very quickly (4-5 msec). They also have high thresholds due to instrinsic membrane properties and the inhibitory inlluence of the PHP cells. (For reviews, see Eaton, et al, 1991 and Faber and Korn, 1978.) Acoustic stimuli are thought to be sufficient to trigger the response (Blader, 1981), both Mauthner cells and PHP cells receive innervation from primary auditory fibers (Faber and Korn, 1978). In addition, the Mauthner cells have been shown physio logically to be very sensitive to acoustic pressure (Canfield and Eaton, 1990). 1.2 LOCALIZING SOUNDS UNDERWATER In contrast to terrestrial vertebrates, there are several reasons for supposing that fish do not use time of arrival or intensity differences between the two ears to localize sounds: underwater sound travels over four times as fast as in air; the fish body provides no acoustic shadow; and fish use a single transducer to sense pressure which is conveyed equally to the two ears. Sound pressure is transduced into vibrations by the swim bladder which, in goldfish, is mechanically linked to the inner ear. Fish are sensitive to an additional component of the acoustic wave, the particle motion. Any particle ofthe medium taking part in the propagation of a longitudenal wave will oscillate about an equilibrium point along the axis of propagation. Fish have roughly the same density as water, and will experience these oscillations. The motion is detected by the bending of sensory hairs on auditory receptor cells by the otolith, an inertial mass suspended above the hair cells. This component of the sound will provide the axis of propagation, but there is a 180 degree ambiguity. Both pressure and particle motion are sensed by hair cells of the inner ear. In goldfish these signals may be nearly segregated. The linkage with the swim bladder impinges primarily on a boney chamber containing two of the endorgans of the inner ear: the saccule and the lagena. The utricle is a third endorgan also thought to mediate some acoustic function, without such direct input from the 3wimbladder. Using both of these components fish can localize sounds. 
According to the phase model (Schuijf, 1981) fish analyze the phase difference between the pressure com ponent of the sound and the particle displacement component to calculate distance and direction. When pressure is increasing, particles will be pushed in the direc tion of sound propagation, and when pressure is decreasing particles will be pulled back. There will be a phase lag between pressure and particle motion which varies with frequency and distance from the sound source. This, and the separation of the pressure from the displacement signals in the ear of some species pose the greatest problems for theories of sound localization in fish. The acoustically triggered escape in goldfish is a uniquely tractable problem in underwater sound localization. First, there is the fairly good segregation of pressure from particle motion at the sensory level. Second I the escape is very rapid. The decision to turn left or right is equivalent to the firing of one or the other Mauthner cell, and this happens within about 4 msec. With transmission delay, this decision relies only on the initial 2 msec or so of the stimulus. For most salient frequencies, the phase lag will not introduce uncertainty: both the first and second derivatives of particle position and acoustic pressure will be either positive or negative. 576 Guzik and Eaton 1.3 THE XNOR MODEL Active pressure input Left sound source Active displacement input No response Mauthner output Right Mauthner output .. inhibitory 0- excitatory Figure 1 Truth table and minimal network for the XNOR model. Given the above simplification of the problem, we can see that each Mauthner cell must perform a logical operation (Guzik and Eaton, 1993j Eaton et al, 1994). The left Mauthner cell should fire when sounds are located on the left, and this occurs when either pressure is increasing and particle motion is from the left or when pressure is decreasing and particle motion is from the right. We can call displacement from the left positive for the left Mauthner cell, and immediately we Directional Hearing by the Mauthner System 577 have the logical operator exclusive-nor (or XNOR). The right Mauthner cell must solve the same problem with a redefinition of right displacement as positive. The conditions for this logic gate are shown in figure 1A for both Mauthner cells. This analysis simplifies our task of understanding the computational role of individual elements in the system. For example, a minimal network could appear as in figure In this model PHP units perform a logical sub-task of the XNOR as AND gates. This model requires at least two functional classes of PHP units on each side of the brain. These PHP units will be activated for the combinations of pressure and displacement that indicate a sound coming from the wrong direction for the Mauthner cell on that side. Both Mauthner cells are activated by sufficient changes in pressure in either direction, high or low, and will be gated by the PHP cells. This minimal model emerged from explorations of the system using the connectionist paradigm, and inspired us to extend our efforts to a more realistic context. 2 THE NETWORK We used a connectionist model to explore candidate solutions to the leftright dis crimination problem that include the populations of neurons known to exist and include a distributed input resembling the sort available from the hair cells of the inner ear. 
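The XNOR model described above reduces to a two-line truth table, sketched below. The boolean encoding of "pressure rising" and "particle motion from the left" is an assumption of the sketch, not code from the original study.

```python
def mauthner_outputs(pressure_rising, motion_from_left):
    """Minimal logic of the XNOR model: the left Mauthner cell computes the
    exclusive-nor of 'pressure rising' and 'particle motion from the left';
    the right cell uses 'motion from the right' as its positive direction."""
    left = (pressure_rising == motion_from_left)
    right = (pressure_rising == (not motion_from_left))
    return left, right

for p in (True, False):
    for d in (True, False):
        left, _ = mauthner_outputs(p, d)
        print(f"pressure {'rising ' if p else 'falling'}, motion from the "
              f"{'left ' if d else 'right'} -> {'left' if left else 'right'} Mauthner fires")
```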
We were interested in generating a number of alternative solutions to be better prepared to interpret physiological recordings from live goldfish, and to look for variations of, or alternatives to, the XNOR model. 2.1 THE .ARCHITECTURE As shown in figure 2, there are four layers in the connectionist model. The input layer consists of four pools of hair cell units. These represent the sensory neurons of the inner ear. There are two pools on each side: the saccule and the utricle. Treating only the horizontal plane, we have ignored the lagena in this model. The saccule is the organ of pressure sensation and the utricle is treated as the organ of particle motion. Each pool contains 16 hair cell units maximally responsive for displacements of their sensory hairs in one particular direction. They are activated as the eosine of the difference between their preferred direction and the stimulus dellection. All other units use sigmoidal activation functions. The next layer consists of units representing the auditory fibers of the VIIIth nerve. Each pool receives inputs from only one pool of hair cell units, as nerve fibers have not been seen to innervate more than one endorgan. There are 10 units per fiber The fiber units provide input to both the inhibitory PHP units, and to the Mauthner units. There are four pools of PHP units, two on each side of the fish. One set on each side represents the collateral PHP eells, and the other set represents the commissural PHP cells (Faber and Korn, 1978). Both types receive inputs from the auditory fibers. The collaterals project only to the Mauthner cell on the same side. The commissurals project to both Mauthner cells. There are five units per PHP pool. 578 Guzik and Eaton The Mauthner cell units receive inputs from saecular and utricular fibers on their same side only, as well as inputs from a single collateral PHP population and both commissural PHP populations. Left Saccule Left Utricle Right Utricle Right Saccule Hair Cells Auditory Nerve Fiber Pools Left Mauthner Right Mautlll1er Figure 2 The architecture. Weights from the PHP units are all constrained to be negative, while all others are constrained to be positive. The weights are implemented using the function below, positive or negative depending on the polarity of the weight. The function asymptotes to zero for negative values, and to the identity function for values above 2. This function vastly improved learning compared with the simpler, but highly nonlinear exponential function used in earlier versions of the model. 2.2 TRAINING We used a total of 240 training examples. We began with a set of 24 directions for particle motion, evenly distributed around 360 degrees. These each appeared twice, once with increasing pressure and once with decreasing pressure, making a base set of 48 examples. Pressure was introduced as a deflection across saccular hair cells of either 0 degrees for low pressure, or 180 degrees for high pressure. These should be thought of as reflecting the expansion or compression of the swim bladder. Targets for the Mauthner cells were either 0 or 1 depending upon the conditions as described in the XNOR model, in figure lA. Directional Hearing by the Mauthner System 579 N ext by randomly perturbing the activations of the hair cells for these 48 patterns, we generated 144 noisy examples. These were randomly increased or decreased up to 10. An additional 48 examples were generated by dividing the hair cell adivity by two to represent sub-threshold stimuli. 
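As a rough illustration of the input coding just described, the sketch below builds one 64-unit pattern from four cosine-tuned pools of 16 hair-cell units. The uniform preferred directions, the multiplicative noise, and the pool ordering are assumptions (some of the reported runs instead used gaussian-distributed utricular preferences and a "mixed" utricular input); targets would follow the XNOR truth table from the earlier sketch.

```python
import numpy as np

def hair_cell_pool(stimulus_deg, n_units=16):
    """Cosine-tuned pool: each unit's activation is the cosine of the angle
    between its preferred direction and the stimulus deflection."""
    preferred = np.arange(n_units) * 360.0 / n_units
    return np.cos(np.deg2rad(preferred - stimulus_deg))

def make_input(displacement_deg, pressure_high, noise=0.0, rng=None):
    """One input pattern: saccular pools encode pressure as a 180-degree (high)
    or 0-degree (low) deflection, utricular pools encode particle motion."""
    pressure_deg = 180.0 if pressure_high else 0.0
    x = np.concatenate([hair_cell_pool(pressure_deg),       # left saccule
                        hair_cell_pool(displacement_deg),   # left utricle
                        hair_cell_pool(displacement_deg),   # right utricle
                        hair_cell_pool(pressure_deg)])      # right saccule
    if rng is not None and noise > 0.0:
        x = x * (1.0 + rng.uniform(-noise, noise, x.shape))  # "noisy" training examples
    return x

rng = np.random.default_rng(1)
x = make_input(displacement_deg=75.0, pressure_high=True, noise=0.10, rng=rng)
print(x.shape)   # (64,), i.e. 4 pools of 16 hair-cell units
```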
These last 48 targets were set to zero. The network was trained in batch mode with backpropagation to minimize a cross entropy error measure, using conjugate gradient search. Unassisted backpropaga tion was unsuccessful at finding solutions. For the eight solutions discussed here, two parameters were varied at the inputs. In some solutions the utride was stimulated with a vedor sum of the displacement and the pressure components, or a "mixed" input. In some solutions the hair cells in the utride are not distributed uniformly, but in a gaussian manner with the mean tuning of 45 degrees to the right or left, in the two ears respedively. This approximates the actual distribution of hair cells in the goldfish utride (Platt, 1977). 3 RESULTS Analyzing the activation of the hidden units as a fundion of input pattern we found activity consistent with known physiology, nothing inconsistent with our knowledge of the system, and some predidions to be evaluated during intracellular recordings from PHP cells and auditory afFerents. First, many PHP cells were found exhibiting a logical fUndion, which is consistent with our minimal model described above. These tended to projed only to one Mauthner cell unit, which suggests that primarily the collateral PHP cells will demonstrate logical properties. Most logical PHP units were NAND gates with very large weights to one Mauthner cell. An example is a unit which is on for all stimuli except those having displacements anywhere on the left when pressure is Second, saccular fibers tended to be either sensitive to high or low pressure, consis tent with recordings of Furukawa and Ishii (1967). In addition there were a dass which looked like threshold fibers, highly active for all supra-threshold stimuli, and inactive for all sub-threshold stimuli. There were some fibers with no obvious se ledivity, as well. Third, utricular fibers often demonstrate sensitivity for displacements exclusively from one side ofthe fish, consistent with our minimal model. Right and left utricular fibers have not yet been demonstrated in the real system. Utricular fibers also demonstrated more coarsely tuned, less interpretable receptive fields. All solutions that included a mixed input to the utrieie, for example, pro duced fibers that seemed to be "not 180 degree" ,or "not 0 degree", countering the pressure vedors. We interpret these fibers as doing dean-up given the absence of negative weights at that layer. Fourth, sub-threshold behavior of units is not always consistent with their supra threshold behavior. At sub-threshold levels of stimulation the adivity of units may not refted their computational role in the behavior. Thus, intracellular recordings should explore stimulus ranges known to elicit the behavior. 580 Guzik and Eaton Fifth, Mauthner units usually receive very strong inputs from pressure fibers. This is consistent with physiological recordings which suggest that the Mauthner cells in goldfish are more sensitive to sound pressure than displacement (Canfield and Sixth, Mauthner cells always acquired rdatively equal high negative biases. This is consistent with the known low input resistance of the real Mauthner eells, giving them a high threshold (Faber and Korn, 1978). Seventh, PHP cells that maintain substantial bilateral connections tend to be ton ically active. These contribute additional negative bias to the Mauthner cells. The relative sizes of the connections are often assymetric. 
This suggests that the commis sural PHP cells serve primarily to regulate Mauthner threshold, ensure behavioral response only to intense stimuli, consistent with Faber and Korn (1978). These cells could only contribute to a partial solution of the XNOR problem. Eighth, all solutions consistently used logic gate PHP units for only 50 to 75 of the training examples. Probably distributed solutions relying on the direct con nections of auditory nerve fibers to Mauthner cells were more easily learned, and logic gate units only developed to handle the unsolved eases. Cases solved without logic gate units were solved by assymetric projections to the Mauthner cells of one polarity of pressure and one class of direction fibers, left or right. Curiously, most of these eases involved a preferential projection from high pressure fibers to the Mauthner units, along with directional fibers encoding displacements from each Mauthner unit''s positive direction. This means the logic gate units tended to handle the low pressure eases. This may be a result of the presence of the assymetric distributions of utricular hair cells in 6 out of the 8 solutions. 4 CONCLUSIONS Ve have generated predictions for the behavior of neurons in the Mauthner system under different conditions of acoustic stimulation. The predictions generated with our connectionist model are consistent with our interpretation of the phase model for underwater sound localization in fishes as a logical operator. The results are also consistent with previously described properties of the Mauthner system. Though perhaps based on the characteristics more of the training procedure, our solutions suggest that we may find a mixed solution in the fish. Direct projections to the Mauthner cells from the auditory nerve perhaps handle many of the commonly encountered acoustic threats. The results of Blaxter (1981) support the idea that fish do escape from stimuli regardless of the polarity of the initial pressure change. Without significant nonlinear processing at the Mauthner cell itsdf, or more com plex processing in the auditory fibers, direct connections could not handle all of these eases. These possibilities deserve exploration. We propose different computational roles for the two classes of inhibitory PHP neurons. We expect only unilaterally-projecting PHP cells to demonstrate some logical function of pressure and particle motion. We believe that some elements of the Mauthner system must be found to demonstrate such minimal logical functions if the phase modd is an explanation for left-right discrimination by the Mauthner system. Directional Hearing by the Mauthner System 581 We are currently preparing to deliver controlled acoustic stimuli to goldfish during acute intracellular recording procedures from the PHP neurons, the afferent fibers and the Mauthner cells. Our insights from this model will greatly assist us in designing the stimulus regimen, and in interpreting our experimental results. Plans for future computational work are of a dynamic model that will include the results of these physiological investigations, as well as a more realistic version of the Mauthner .Acknowledgements We are grateful for the technical assistance of members of the Boulder Connectionist Research Group, especially Don Mathis for help in debugging and optimizing the original code. We thank P.L. Edds-Walton for crucial discussions. This work was supported by a grant to RCE from the National Institutes of Health (ROI NS22621). References Blader, J.H.S., J.A.B. 
Gray, and E.J. Denton (1981) Sound and startle responses in herring shoals. J. Mar. BioI. Assoc. UK, 61: 851-869 Canfield, J.G. and R.C. Eaton (1990) Swimbladder acoustic pressure transduction intiates Mauthner-mediated escape. Nature, 37: 760-762 Eaton, R.C., J.G. Canfield and A.L. Guzik (1994) Left-right discrimination of sound onset by the Mauthner system. Brain Behav. Evol., in pre66 Eaton, R.C., R. DiDomenico and J. Nissanov (1991) Role of the Mauthner cell in sensorimotor integration by the brain stem escape network. Brain Behav. Evol., Faber, D.S. and H. Korn (1978) Electrophysiology of the Mauthner cell: Basic properties, synaptic mechanisms and associated networks. In Neurobiology of the Mauthner Cell, D.S. Faber and H. Korn (eds) , Raven Press, NY, pp. 47-131 Fay, R.R.(1984) The goldfish ear codes the axis of acoustic particle motion in three Furukawa, T. and Y. Ishii (1967) Effects of static bending of sensory hairs on sound reception in the goldfish. Japanese J. Physiol., 17: 572-588 Guzik, A.L. and R.C. Eaton (1993) The XNOR model for directional hearing by the Mauthner system. Soc. Neurosci. Abstr. PIaU, C. (1977) Hair cell distribution and orientation in goldfish otolith organs. J. Schuijf, A. (1981) Models of acoustic localization. In Hearing and Sound Commu nication in Fishes, W.N. Tavolga, A.N . Popper and R.R. Fay (eds.), Springer, New' - source_sentence: Effect of input stimulus coding on self-supervised learning performance sentences: - 'INTRODUCTION Formal language learning (Gold, 1969) has been a topic of concern for cognitive science and artificial intelligence. It is the task of inducing a computational description of a formal language from a sequence of positive and negative examples of strings in the target lan guage. Neural information processing approaches to this problem involve the use of recur rent networks that embody the internal state mechanisms underlying automata models (Cleeremans et aI., 1989; Elman, 1990; Pollack, 1991; Giles et aI, 1992; Watrous Kuhn, 1992). Unlike traditional automata-based approaches, learning systems relying on recurrent networks have an additional burden: we are still unsure as to what these networks are doing.Some researchers have assumed that the networks are learning to simulate finite state machines (FSMs) in their state dynamics and have begun to extract FSMs from the net works'' state transition dynamics (Cleeremans et al., 1989; Giles et al., 1992; Watrous Kuhn, 1992). These extraction methods employ various clustering techniques to partition the internal state space of the recurrent network into a finite number of regions correspond ing to the states of a finite state automaton. This assumption of finite state behavior is dangerous on two accounts. First, these extrac tion techniques are based on a discretization of the state space which ignores the basic def inition of information processing state. Second, discretization can give rise to incomplete computational explanations of systems operating over a continuous state space. SENSITIVITY TO INITIAL CONDITIONS In this section, I will demonstrate how sensitivity to initial conditions can confuse an FSM extraction system. The basis of this claim rests upon the definition of information processing state. Information processing (lP) state is the foundation underlying automata theory. Two IP states are the same if and only if they generate the same output responses for all possible future inputs (Hopcroft Ullman, 1979). 
This definition is the fulcrum for many proofs and techniques, including finite state machine minimization. Any FSM extraction technique should embrace this definition, in fact it grounds the standard FSM minimization methods and the physical system modelling of Crutchfield and Young (Crutchfield Young, 1989). Some dynamical systems exhibit exponential divergence for nearby state vectors, yet remain confined within an attractor. This is known as sensitivity to initial conditions. If this divergent behavior is quantized, it appears as nondeterministic symbol sequences (Crutch field Young, 1989) even though the underlying dynamical system is completely deter ministic (Figure 1). Consider a recurrent network with one output and three recurrent state units. The output unit performs a threshold at zero activation for state unit one. That is, when the activation of the first state unit of the current state is less than zero then the output is A. Otherwise, the output is B. Equation 1 presents a mathematical description. Set) is the current state of the system 0 (t) is the current output. Figure 2 illustrates what happens when you run this network for many iterations. The point in the upper left hand state space is actually a thousand individual points all within a ball of radius 0.01. In one iteration these points migrate down to the lower corner of the state space. Notice that the ball has elongated along one dimension. After ten iterations the orig inal ball shape is no longer visible. After seventeen, the points are beginning to spread along a two dimensional sheet within state space. And by fifty iterations, we see the net work reaching the its full extent of in state space. This behavior is known as sensitivity to initial conditions and is one of three conditions which have been used to characterize cha otic dynamical systems (Devaney, 1989). In short, sensitivity to initial conditions implies Fool''s Gold: Extracting Finite State Machines from Recurrent Network Dynamics 503 Figure 1: Examples of deterministic dynamical systems whose discretize trajectories appear nondeterministic. that any epsilon ball on the attractor of the dynamical will exponentially diverge, yet still be contained within the locus of the attractor. The rate of this divergence is illustrated in Figure 3 where the maximum distance between two points is plotted with respect to the number of iterations. Note the exponential growth before saturation. Saturation occurs as the point cloud envelops the attractor. No matter how small one partitions the state space, sensitivity to initial conditions will eventually force the extracted state to split into multiple trajectories independent of the future input sequence. This is characteristic of a nondeterministic state transition. Unfortu nately, it is very difficult, and probably intractable, to differentiate between a nondetermin istic system with a small number of states or a deterministic with large number of states. In certain cases, however, it is possible to analytically ascertain this distinction (Crutchfield THE OBSERVERS'' PARADOX One response to this problem is to evoke more computationally complex models such as push-down or linear-bounded automata. Unfortunately, the act of quantization can actually introduce both complexion and complexity in the resulting symbol sequence. Pollack and I have focused on a well-hidden problems with the symbol system approach to understand ing the computational powers of physical systems. 
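Equation 1 itself is not reproduced in this extract, so the sketch below uses the logistic map as a stand-in chaotic state update to illustrate the same point: an epsilon ball of initial states spreads exponentially, and the thresholded A/B output eventually differs across points that started arbitrarily close together.

```python
import numpy as np

def step(x):
    """Logistic map x -> 4x(1-x): a standard stand-in for a chaotic state update
    (not the recurrent network of Equation 1)."""
    return 4.0 * x * (1.0 - x)

def quantize(x):
    """Threshold observer: report 'A' below 0.5 and 'B' otherwise."""
    return np.where(x < 0.5, "A", "B")

rng = np.random.default_rng(0)
cloud = 0.3 + 0.005 * rng.uniform(-1, 1, size=1000)   # an epsilon ball of initial states

x = cloud.copy()
for t in range(1, 26):
    x = step(x)
    if t in (1, 5, 10, 15, 20, 25):
        spread = x.max() - x.min()
        print(f"iteration {t:2d}: spread {spread:.4f}, symbols seen {sorted(set(quantize(x)))}")
```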
Figure 2: The state space of a recurrent network whose next state transitions are sensitive to initial conditions. The initial epsilon ball contains 1000 points. These points first straddle the output decision boundary at iteration seven. This work (Kolen and Pollack, 1993; Kolen and Pollack, in press) demonstrated that computational complexity, in terms of Chomsky''s hierarchy of formal languages (Chomsky, 1957; Chomsky, 1965) and Newell and Simon''s physical symbol systems (Newell and Simon, 1976), is not intrinsic to physical systems. The demonstration below shows how apparently trivial changes in the partitioning of state space can produce symbol sequences from varying complexity classes. Consider a point moving in a circular orbit with a fixed rotational velocity, such as the end of a rotating rod spinning around a fixed center, or imagine watching a white dot on a spinning bicycle wheel. We measure the location of the dot by periodically sampling the location with a single decision boundary (Figure 4, left side). If the point is to the left of the boundary at the time of the sample, we write down an "l". Likewise, we write down an "r" when the point is on the other side. (The probability of the point landing on the boundary is zero and can arbitrarily be assigned to either category without affecting the results below.) In the limit, we will have recorded an infinite sequence of symbols containing long sequences of r''s and l''s. The specific ordering of symbols observed in a long sequence of multiple rotations is dependent upon the initial rotational angle of the system. Figure 3: Spread of initial points across the attractor as measured by maximum distance. Figure 4: On the left, two decision regions which induce a context free language. θ is the current angle of rotation. At the time of sampling, if the point is to the left (right) of the dividing line, an l (r) is generated. On the right, three decision regions which induce a context sensitive language. However, the sequence does possess a number of recurring structural regularities, which we call sentences: a run of r''s followed by a run of l''s. For a fixed rotational velocity (rotations per time unit) and sampling rate, the observed system will generate sentences of the form r^n l^m (n, m > 0). (The notation r^n indicates a sequence of n r''s.) For a fixed sampling rate, each rotational velocity specifies up to three sentences whose number of r''s and l''s differ by at most one. These sentences repeat in an arbitrary manner. Thus, a typical subsequence of a rotator which produces sentences r^n l^n, r^n l^(n+1), r^(n+1) l^n would look like an arbitrary concatenation of these three sentence types. A language of sentences may be constructed by examining the families of sentences generated by a large collection of individuals, much like a natural language is induced from the abilities of its individual speakers. In this context, a language could be induced from a population of rotators with different rotational velocities where individuals generate sentences of the form {r^n l^n, r^n l^(n+1), r^(n+1) l^n}, n > 0. The resulting language can be described by a context free grammar and has unbounded dependencies; the number of l''s is a function of the number of preceding r''s. These two constraints on the language imply that the induced language is context free. To show that this complexity class assignment is an artifact of the observational mechanism, consider the mechanism which reports three disjoint regions: l, c, and r (Figure 4, right side). Now the same rotating point will generate sequences of the form r^n c^m l^k. For a fixed sampling rate, each rotational velocity specifies up to seven sentences, r^n c^m l^k, where n, m, and k can differ by no more than one. Again, a language of sentences may be constructed containing all sentences in which the number of r''s, c''s, and l''s differs by no more than one. The resulting language is context sensitive since it can be described by a context sensitive grammar and cannot be context free as it is the finite union of several context sensitive languages related to r^n c^n l^n. CONCLUSION Using recurrent neural networks as the representation underlying the language learning task has revealed some inherent problems with the concept of this task. While formal languages have mathematical validity, looking for language induction in physical systems is questionable, especially if that system operates with continuous internal states. As I have shown, there are two major problems with the extraction of a learned automaton from our models. First, sensitivity to initial conditions produces nondeterministic machines whose trajectories are specified by both the initial state of the network and the dynamics of the state transformation. The dynamics provide the shape of the eventual attractor. The initial conditions specify the allowable trajectories toward that attractor. While clustering methods work in the analysis of feed-forward networks because of neighborhood preservation (as each layer is a homeomorphism), they may fail when applied to recurrent network state space transformations. FSM construction methods which look for single transitions between regions will not help in this case because the network eventually separates initially nearby states across several FSM state regions. The second problem with the extraction of a learned automaton from a recurrent network is that trivial changes in observation strategies can cause one to induce behavioral descriptions from a wide range of computational complexity classes for a single system. It is the researcher''s bias which determines that a dynamical system is equivalent to a finite state automaton. One response to the first problem described above has been to eliminate the sources of nondeterminism from the mechanisms. Zeng et al. (1993) corrected the second order recurrent network model by replacing the continuous internal state transformation with a discrete step function. (The continuous activation remained for training purposes.) This move was justified by their focus on regular language learning, as these languages can be recognized by finite state machines. This work is questionable on two points, however. First, tractable algorithms already exist for solving this problem (e.g. Angluin, 1987). Second, they claim that the network is self-clustering the internal states. Self-clustering occurs only at the corners of the state space hypercube because of the discrete activation function, in the same manner as a digital sequential circuit "clusters" its states. Das and Mozer (1994), on the other hand, have relocated the clustering algorithm. Their work focused on recurrent networks that perform internal clustering during training.
These networks operate much like competitive learning in feed-forward networks (e.g. Rumelhart and Zipser, 1986) as the dynamics of the learning rules constrain the state representations such that stable clusters emerge. The shortcomings of finite state machine extraction must be understood with respect to the task at hand. The actual dynamics of the network may be inconsequential to the final product if one is using the recurrent network as a pathway for designing a finite state machine. In this engineering situation, the network is thrown away once the FSM is extracted. Neural network training can be viewed as an "interior" method for finding discrete solutions. It is interior in the same sense as linear programming algorithms can be classified as either edge or interior methods. The former follows the edges of the simplex, much like traditional FSM learning algorithms search the space of FSMs. Interior methods, on the other hand, explore search spaces which can embed the target spaces. Linear programming algorithms employing interior methods move through the interior of the defined simplex. Likewise, recurrent neural network learning methods swim through mechanisms with multiple finite state interpretations. Some researchers, specifically those discussed above, have begun to bias recurrent network learning to walk the edges (Zeng et al., 1993) or to internally cluster states (Das and Mozer, 1994). In order to understand the behavior of recurrent networks, these devices should be regarded as dynamical systems (Kolen, 1994). In particular, most common recurrent networks are actually iterated mappings, nonlinear versions of Barnsley''s iterated function systems (Barnsley, 1988). While automata also fall into this class, they are a specialization of dynamical systems, namely discrete time and state systems. Unfortunately, information processing abstractions are only applicable within this domain and do not make any sense in the broader domains of continuous time or continuous space dynamical systems. Acknowledgments The research reported in this paper has been supported by Office of Naval Research grant number N00014-92-J-1195. I thank all those who have made comments and suggestions for improvement of this paper, especially Greg Saunders and Lee Giles. References Angluin, D. (1987). Learning Regular Sets from Queries and Counterexamples. Information and Computation. Barnsley, M. (1988). Fractals Everywhere. Academic Press: San Diego, CA. Chomsky, N. (1957). Syntactic Structures. The Hague: Mouton and Co. Chomsky, N. (1965). Aspects of the Theory of Syntax. Cambridge, Mass.: MIT Press. Cleeremans, A., Servan-Schreiber, D. and McClelland, J. L. (1989). Finite state automata and simple recurrent networks. Neural Computation, 1, 372-381. Crutchfield, J. and Young, K. (1989). Computation at the Onset of Chaos. In W. Zurek (Ed.), Entropy, Complexity, and the Physics of Information. Reading: Addison-Wesley. Das, R. and Mozer, M. (1994). A Hybrid Gradient-Descent/Clustering Technique for Finite State Machine Induction. In Jack D. Cowan, Gerald Tesauro, and Joshua Alspector (Eds.), Advances in Neural Information Processing Systems 6. Morgan Kaufmann: San Francisco. Devaney, R. L. (1989). An Introduction to Chaotic Dynamical Systems. Addison-Wesley. Elman, J. (1990). Finding structure in time. Cognitive Science, 14, 179-211. Giles, C. L., Miller, C. B., Chen, D., Sun, G. Z., Chen, H. H. and Lee, Y. C. (1992). Extracting and Learning an Unknown Grammar with Recurrent Neural Networks. In John E. Moody, Steven J. Hanson and Richard P. Lippmann (Eds.), Advances in Neural Information Processing Systems 4. Morgan Kaufmann. Gold, E. M.
(1969). Language identification in the limit. Information and Control, 10,372- Hopcroft, J. E. Ullman, J. D. (1979). Introduction to Automata Theory, Languages, and Computation. Addison-Wesely. Kolen, J. F. (1994) Recurrent Networks: State Machines or Iterated Function Systems? In M. C. Mozer, P. Smolensky, D. S. Touretzky, J. L. Elman, AS. Weigend (Eds.), Proceedings of the 1993 Connectionist Models Summer School. (pp. 203-210) Hillsdale, NJ: Erlbaum Associates. Kolen, J. F. Pollack, J. B. (1993). The Apparent Computational Complexity of Physical Systems. In Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society. Laurence Earlbaum. Kolen, J. F. Pollack, J. B. (In press) The Observers'' Paradox: The Apparent Computational Complexity of Physical Systems. Journal of Experimental and Theoretical Artificial Intelli gence. Pollack, J. B. (1991). The Induction Of Dynamical Recognizers. Machine Learning, 7.227- Newell, A. Simon, H. A (1976). Computer science as empirical inquiry: symbols and search. Communications of the Associationfor Computing Machinery, 19, 113-126. Rumelhart, D. E., and Zipser, D. (1986). Feature Discovery by Competitive Learning. In D. E. Rumelhart, J. L. McClelland, and the PDP Research Group, (Eds.), Parallel Distributed Processing. Volume 1. 151-193. MIT Press: Cambridge, MA Watrous, R. L. Kuhn, G. M. (1992). Induction of Finite-State Automata Using Second Order Recurrent Networks. In John E. Moody, Steven J. Hanson Richard P. Lippman, (Eds.), Advances in Neural Information Processing Systems 4. Morgan Kaufman. Zeng, Z., Goodman, R. M., Smyth, P. (1993). Learning Finite State Machines With Self-Clus tering Recurrent Networks. Neural Computation, 5, 976-990 PART IV NEUROSCIENCE' - 'INTRODUCTION Temporal difference (TD) planning [6, 7] uses prediction for control. Consider an agent moving around a finite grid such as the one in figure 1 (the agent is incapable of crossing the barrier) trying to reach a goal whose position it does not know. If it can predict how far away from the goal it is at the current step, and how far away from the goal it is at the next step, after making a move, then it can decide whether or not that move was helpful or harmful. If, in addition, it can record this fact, then it can learn how to navigate to the goal. This generation of actions from predictions is closely related to the mechanism of dynamical programming. TD is used to learn the predictions in the first place. Consider the agent moving around randomly on the grid, receiving a negative reinforcement of -1 for every move it makes apart from moves which take it onto the goal. In this case, if it can estimat.e from every location it visits, how much reinforcement (discounted by how soon it arrives) it will get before it next reaches the goal, it will be predicting how far away it is, based on the random method of selecting actions. TD''s mechanism of learning is to force the predictions to be consistent; the prediction from location a should be -1 more than the average of the predictions from the locations that can be reached in one step (hence the extra -1 reinforcement) from a. 464 Navigating Through Temporal Difference 465 If the agent initially selects each action with the same probability, then the estimate of future reinforcement from a will be monotonically related to how many steps a is away from the goal. This makes the predictions useful for criticising actions as above. 
In practice, the agent will modify its actions according to this criticism at the same time as learning the predictions based on those actions. Barto, Sutton and Watkins [2] develop this example, and show how the TD mech anism coupled with a punctate representation of the stimulus (referred to as''RBsw below) finds the optimal paths to the goal. ''RBsw ignores the cues shown in figure 1, and devotes one input unit to each location on the grid, which fires if and only if the agent is at that place. TD methods can however work with more general codes. Section 2 considers al ternative representations, including ones that are sensitive to the orientation of the agent as it moves through the grid, and section 3 looks at a restricted form of la. tent learning - what the agent can divine about its environment in the absence of reinforcement. Both techniques can improve the speed of learning. 2 ALTERNATE REPRESENTATIONS Stimulus representations, the means by which the agent finds out from the environ ment where it is, can be classified along two dimensions; whether they are punctate or distributed, and whether they are directionally sensitive or in register with the world. Over most of the grid, a ''sensible'' distributed representation, such as a coarse-coded one, would be expected to make learning faster, as information about the value and action functions could be shared across adjacent grid points. There are points of discontinuity in the actions, as in the region above the right hand arm of the barrier, but they are few. In his PhD thesis [9], Watkins considers a rather similar problem to that in figure I, and solves it using his variant ofTD, Q-Iearning, based on a CMAC [1] coarse-coded representation of the space. Since his agent moves in a continuous bounded space, rather than being confined merely to discrete grid points, something of this sort is anyway essential. After the initial learning, Watkins arbitrarily makes the agent move ten times more slowly in a closed section of the space. This has a similar effect to the barrier in inducing a discontinuity in the action space. Despite the CMACS forcing the system to share information across such discontinuities, they were able to learn the task quickly. The other dimension over which representations may vary involves the extent to which they are sensitive to the direction in which the agent is facing. This is of interest if the agent must construe its location from the cues around the grid. In this case, rather than moving North, South, East or West, which are actions registered with the world, the agent should only move Ahead, Left or Right (Behind is disabled as an additional constraint), whose effects are also orientation dependent. This, together with the fact that the representation will be less compact (it having a larger input dimensionality) should make learning slower. Dynamical programming and its equivalents are notoriously subject to Bellman''s curse of dimensionality, an engineering equivalent of exponential explosion in search. Table 1 shows four possible representations classified along these two dimensions. 466 Dayan Coarse ness Directionally Punctate Distributed Sensltlve R,x RA Insensltlve ''RBSW ''RCMAC Table 1: Representations. ''RBSW is the representation Barto, Sutton and Watkins used. R,x is punctate and directionally sensitive - it devotes four units to every grid point, one of which fires for each possible orientation of the agent. 
''RcIAC'' the equivalent of Watkins'' representation, was not simulated, because its capabilities would not differ markedly from those of the mapping-based representation developed in the next section. nA is rather different from the other representations; it provides a test of a represen tation which is more directly associated with the sensory information that might be available directly from the cues. Figure 2 shows how ''RA works. Various identifiable cues, C 1 ... C c (c 7 in the figure) are scattered around the outside of the grid, and the agent has a fictitious ''retina'' which rotates with it. This retina is divided into a number of angular buckets (8 in the figure), and each bucket has c units, the iSh one of which responds if the cue Ci is visible in that bucket. This representation is clearly directionally sensitive (if the agent is facing a different way, then so is its retina, and so no cue will be visible in the same bucket as it was before), and also distributed, since in general more than one cue will be visible from every location. Note that there is no restriction on the number of units that can fire in each bucket at any time - more than one will fire if more than one cue is visible there. Also, under the present system ''RA will in general not work if its coding is ambiguous - grid points must be distinguishable. Finally, it should be clear that ''RA is not biologically plausible. Figure 3 shows the learning curves for the three representations simulated. Each point is generated by switching off the learning temporarily after a certain number of iterations, starting the agent from everywhere in the grid, and averaging how many steps it takes in getting to the goal over and above the minimum necesary. It is apparent that n.x is substantially worse, but, surprisingly, that ''RA is actually better than ''RBSW . This implies that the added advantage of its distributed na ture more than outweighs its disadvantages of having more components and being directionally sensitive. One of the motivations behind studying alternate representations is the experimen tal findings on place cells in the hippocampi of rats (amongst other species). These are cells that fire only when the rat is at a certain location in its environment. Although their existence has led to many hypotheses about rat cognitive mapping (see [5J for a substantial discussion of place cells and mapping), it is important to note that even with a map, there remains the computational1y intensive problem of navigation addressed, in this paper, by TD. ''RA, being closely related to the input stimuli is quite unlike a place cell code - the other representations all bear some similarities. Navigating Through Temporal Difference 467 3 GOAL-FREE LEARNING One of the problems with the TD system as described is that it is incapable oflatent learning in the absence of reinforcement or a goal. If the goal is just taken away, but the -1 reinforcements are still applied at each step, then the values assigned to each location will tend to -00. If both are removed, then although the agent will wander about its environment with random gay abandon, it will not pick up anything that could be used to speed subsequent learning. Latent learning experiments with rats in dry mazes prove fairly conclusively that rats running mazes in the absence of rewards and punishments learn almost as much as rats that are reinforced. One way to solve this problem is suggested by Sutton''s DYNA architecture [7]. 
Briefly, this constructs a map of place x action - next place, and takes steps in the fictitious world constructed from its map in-between taking steps in the real world, as a way of ironing out the computational ''bumps'' (ie inconsistencies) in the value and action functions. Instead, it is possible to avoid constructing a complete map by altering the repre sentation of the environment used for learning the prediction function and optimal actions. The section on representations concluded that coarse-coded representations are generally better than punctate ones, since information can be shared between neighbouring points. However, not all neighbouring points are amenable to this sharing, because of discontinuities in the value and action functions. If there were a way of generating a coarse coded representation (generally from a punctate one) that is sensitive to the structure of the task, rather than arbitrarily assigned by the environment, it should provide the base for faster learning still. In this case, neighbouring points should only be coded together if they are not separated by the barrier. The initial exploration would allow the agent to learn this much about the structure of the environment. Consider a set of units whose job is to predict the future discounted sum of firings of the raw input lines. Using ''R.Bsw during the initial stage of learning when the act.ions are still random, if the agent is at location (3,3) of the grid, say, then the discounted prediction of how often it will be in (3,4) (ie the frequency with which the single unit representing (3,4) will fire) will be high, since this location is close. However, the prediction for (7,11) will be low, because it is very unlikely to get there quickly. Consider the effect of the barrier: locations on opposite sides of it, eg (1,6) and (2,6), though close in the Euclidean (or Manhattan) metric on the grid, are far apart in the task. This means that the discounted prediction of how often the agent will be at (1,6) given that it starts at (2,6), will be proportionately lower. Overall, the prediction units should act like a coarse code, sensitive to the struc ture of the task. As required, this information about the environment is entirely independent of whether or not the agent is reinforced during its exploration. In fact, the resulting ''map'' will be more accurate if it is not, as its exploration will be more random. The output of the prediction units is taken as an additional source of information for the value and action functions. Since their main aim is to create intelligently distributed representations from punc tate ones, it is only appropriate to use these prediction units for ''RBsw and ''R4X '' Figure 4 compares average learning curves for ''RBsw with and without these ex-468 Dayan tra mapping units, and with and without 6000 steps of latent learning (LL) in the absence of any reinforcement. A significant improvement is apparent. Figure 5 shows one set of predictions based on the 1lBsw representation! after a few un-reinforced iterations. The predictions are clearly fairly well developed and smooth - a predictable exponentially decaying hump. The only deviations from this are at the barrier and along the edges, where the effects of impermeability and immobility are apparent. Figure 6 shows the same set of predictions but after 2000 reinforced iterations, by which time the agent reaches the goal almost optimally. 
The predictions degenerate from being roughly radially symmetric (bar the barrier) to being highly asymmetric. Once the agent has learnt how to get to the goal from some location, the path it will follow, and so the locations it will visit from there, is largely fixed. The asymptotic values of the predictions will therefore be 0 for units not on the path, and -( for those on the path, where r is the number of steps since the agent''s start point and ''Y is the discounting factor weighting immediate versus distant reinforcement. This is a severe limitation since it implies that the topological information present in the early stages of learning disappears evaporates, and with it almost all the benefits of the prediction units. 4 DISCUSSION Navigation comprises two problems; where the agent and the goals in its environ ment are, and how it can get to them. Having some form of cognitive map, as is suggested by the existence of place cells, addresses the first, but leaves open the second. For the case of one goal, the simple TD method described here is one solution. TD planning methods are clearly robust to changes in the way the input stimu lus is represented. Distributed codes, particularly ones that allow for the barrier, make learning faster. This is even true for 1lA'' which is sensitive to the orientation of the agent. All these results require each location to have a unique representa tion - Mozer and Bachrach [4] and Chrisley [3] and references therein look at how ambiguities can be resolved using information on the sequence of states the agent traverses. Since these TD planning methods are totally general, just like dynamical program ming, they are unlikely to scale well. Some evidence for this comes from the rel atively poor performance of 1l.x , with its quadrupled input dimension. This puts the onus back either onto dividing the task into manageable chunks, or onto more sophisticated representation. A cknow ledgements I am very grateful to Jay Buckingham, Kate Jeffrey, Richard Morris, Toby Tyrell, David Willshaw, and the attendees of the PDP Workshop at Edinburgh, the Con nectionist Group at Amherst, and a spatial learning workshop at King''s College Cambridge for their helpful comments. This work was funded by SERC. 1 Note that these are normalised to a maximum value of 10, for graphical convenience. Navigating Through Temporal Difference 469 References [1] Albus, JS (1975). A new approach to manipulator control: the Cerebellar Model Articulation Controller (CMAC). Transactions of the ASME: Journal of Dynamical Systems, Measurement and Control, 97, pp 220-227. [2] Barto, AG, Sutton, RS . Watkins, CJCH (1989). Learning and Sequential Decision Making. Technical Report 89-95, Computer and Information Science, University of Massachusetts, Amherst, MA. [3] Chrisley, RL (1990). Cognitive map construction and use: A parallel dis tributed approach. In DS Touretzky, J Elman, TJ Sejnowski, . GE Hinton, editors, Proceedings of the 1990 Con nectionist M odds Summer School. San Mateo, CA: Morgan Kaufmann. [4] Mozer, MC, . Bachrach, J (1990). Discovering the structure of a reactive en vironment by exploration. In D Touretzky, editor, Advances in Neurallnfor mation Processing Systems, , pp 439-446. San Mateo, CA: Morgan Kaufmann. [5] O''Keefe, J Nadel, L (1978). The Hippocampus as a Cognitive Map. Oxford, England: Oxford University Press. [6] Sutton, RS (1988). Learning to predict by the methods of temporal difference. Machine Learning, 3, pp 9-44. [7] Sutton, RS (1990). 
Integrated architectures for learning, planning, and reacting based on approximating dynamic progranuning. In Proceedings of the Seventh International Conference on Machine Learning. San Mateo, CA: Morgan Kauf [8] Sutton, RS, . Barto, AG. To appear. Time-derivative models of Pavlovian conditioning. In M Gabriel . JW Moore, editors, Learning and Computational Neuroscience. Cambridge, MA: MIT Press. [9J Vatkins, CJCH (1989). Learning from Delayed Rewards. PhD Thesis. Univer sity of Cambridge, England. Agall arrier OriCIIlltloD ''Retina'' Anplar bucket Dot rlrina 1. flrina Fig 2: The ''retina'' for 1lA 470 Dayan Average extra steps to goal Learning iterations Fig 3: Different representations Fig 5: Initial predictions from (5,6) Average extra steps to goal Learning iterations Fig 4: Mapping with ''RBSW Fig 6: Predictions after 2000 iterations' - 'Introduction Hand-written digit recognition has become one of the touchstone problems in neural networks recently. Large databases of training examples such as the NIST (National Institute of Standards and Technology) Special Database 3 have become available, and real-world applications with clear practical value, such as recognizing zip codes in letters, have emerged. Diverse architectures with varying learning rules have been proposed, including feed-forward networks (Denker et al. 1989; Ie Cun et al. 1990; Martin and Pittman 1990), self-organizing maps (Allinson et al. 1994), and dedicated approaches such as the neocognitron (Fukushima and Wake 1990). The problem is difficult because handwriting varies a lot, some digits are easily confusable, and recognition must be based on small but crucial differences. For ex ample, the digits 3 and 8, 4 and 9, and 1 and 7 have several overlapping segments, and the differences are often lost in the noise. Thus, hand-written digit recogni tion can be seen as a process of identifying the distinct features and producing an internal representation where the significant differences are magnified, making the recognition easier. Laterally Interconnected Self-organizing Maps in Handwritten Digit Recognition 737 In this paper, the Laterally Interconnected Synergetically Self-Organizing Map ar chitecture (LISSOM; Sirosh and Miikkulainen 1994, 1995, 1996) was employed to form such a separable representation. The lateral inhibitory connections of the LIS SOM map decorrelate features in the input, retaining only those differences that are the most significant. Using LISSOM as a front end, the actual recognition can be performed by any standard neural network architecture, such as the perceptron. The experiments showed that while direct recognition of the digit bitmaps with a simple percept ron network is successful 72.3 of the time, and recognizing them using a standard self-organizing map (SOM) as the front end 84.1 of the time, the recognition rate is 88.1 based on the LISSOM network. These results suggest that LISSOM can serve as an effective front end for real-world handwritten character recognition systems. 2 The Recognition System 2.1 Overall architecture The system consists of two networks: a 20 x 20 LISSOM map performs the feature analysis and decorrelation of the input, and a single layer of 10 perceptrons the final recognition (Figure 1 (a)). The input digit is represented as a bitmap on the 32 x 32 input layer. Each LISSOM unit is fully connected to the input layer through the af ferent connections, and to the other units in the map through lateral excitatory and inhibitory connections (Figure 1 (b)). 
The excitatory connections are short range, connecting only to the closest neighbors of the unit, but the inhibitory connections cover the whole map . The percept ron layer consists of 10 units, corresponding to digits 0 to 9. The perceptrons are fully connected to the LISSOM map, receiv ing the full activation pattern on the map as their input. The perceptron weights are learned through the delta rule, and the LISSOM afferent and lateral weights through Hebbian learning. 2.2 LISSOM Activity Generation and Weight Adaptation The afferent and lateral weights in LISSOM are learned through Hebbian adapta tion. A bitmap image is presented to the input layer, and the initial activity of the map is calculated as the weighted sum of the input. For unit (i, j), the initial response TJij IS where eab is the activation of input unit (a, b), Ilij ,ab is the afferent weight connecting input unit ( a, b) to map unit (i, j), and (7 is a piecewise linear approximation of the sigmoid activation function. The activity is then settled through the lateral connections. Each new activity TJij (t) at step t depends on the afferent activation and the lateral excitation and inhibition: where Eij,kl and Iij,kl are the excitatory and inhibitory connection weights from map unit (k, l) to (i, j) and TJkl(t - 1) is the activation of unit (k, I) during the previous time step. The constants Ie and Ii control the relative strength of the lateral excitation and inhibition. After the activity has settled, the afferent and lateral weights are modified according to the Hebb rule. Afferent weights are normalized so that the length of the weight 738 Y. CHOE, J. SIROSH, R. MIIKKULAINEN Output Layer (10) tII''d Units with excitatory lateral connections to (iJ) Units with inhibitory lateral connections to (iJ) Figure 1: The system architecture. (a) The input layer is activated according to the bitmap image of digit 6. The activation propagates through the afferent connections to the LISSOM map, and settles through its lateral connections into a stable pattern. This pattern is the internal representation of the input that is then recognized by the perceptron layer. Through ,the connections from LISSOM to the perceptrons, the unit representing 6 is strongly activated, with weak activations on other units such as 3 and 8. (b) The lateral connections to unit (i, j), indicated by the dark square, are shown. The neighborhood of excitatory connections (lightly shaded) is elevated from the map for a clearer view. The units in the excitatory region also have inhibitory lateral connections (indicated by medium shading) to the center unit. The excitatory radius is 1 and the inhibitory radius vector remains the same; lateral weights are normalized to keep the sum of weights constant (Sirosh and Miikkulainen 1994): IllJ,mn - VLmn[llij,mn(t) crinp1]ijmnF'' (3) where Ilij,mn is the afferent weight from input unit (m, n) to map unit (i, j), and crinp is the input learning rate; Wij ,kl is the lateral weight (either excitatory Eij ,kl or inhibitory Iij ,kl) from map unit (k, I) to (i, j), and cr is the lateral learning rate (either crexc or crinh). 2.3 Percept ron Output Generation and Weight Adaptation The perceptrons at the output of the system receive the activation pattern on the LISSOM map as their input. The perceptrons are trained after the LISSOM map has been organized. 
The activation for the perceptron unit Om is where C is a scaling constant, 1]ij is the LISSOM map unit (i,j), and Vij,m is the connection weight between LISSOM map unit (i,j) and output layer unit m. The delta rule is used to train the perceptrons: the weight adaptation is proportional to the map activity and the difference between the output and the target: where crout is the learning rate of the percept ron weights, 1]ij is the LISSOM map unit activity, (m is the target activation for unit m. ((m 1 if the correct digit m, 0 otherwise). Laterally Interconnected Self-organizing Maps in Handwritten Digit Recognition 739 I Representation I Training Test Table 1: Final Recognition Results. The average recognition percentage and its variance over the 10 different splits are shown for the training and test sets. The differences in each set are statistically significant with p .9999. 3 Experiments A subset of 2992 patterns from the NIST Database 3 was used as training and testing data.1 The patterns were normalized to make sure taht each example had an equal effect on the LISSOM map (Sirosh and Miikkulainen 1994). LISSOM was trained with 2000 patterns. Of these, 1700 were used to train the perceptron layer, and the remaining 300 were used as the validation set to determine when to stop training the perceptrons. The final recognition performance of the whole system was measured on the remaining 992 patterns, which neither LISSOM nor the perceptrons had seen during training. The experiment was repeated 10 times with different random splits of the 2992 input patterns into training, validation, and testing sets. The LISSOM map can be organized starting from initially random weights. How ever, if the input dimensionality is large, as it is in case of the 32 X 32 bitmaps, each unit on the map is activated roughly to the same degree, and it is difficult to bootstrap the self-organizing process (Sirosh and Miikkulainen 1994, 1996). The standard Self-Organizing Map algorithm can be used to preorganize the map in this case. The SOM performs preliminary feature analysis of the input, and forms a coarse topological map of the input space. This map can then be used as the starting point for the LISSOM algorithm, which modifies the topological organi zation and learns lateral connections that decorrelate and represent a more clear categorization of the input patterns. The initial self-organizing map was formed in 8 epochs over the training set, grad ually reducing the neighborhood radius from 20 to 8. The lateral connections were then added to the system, and over another 30 epochs, the afferent and lateral weights of the map were adapted according to equations 3 and 4. In the beginning, the excitation radius was set to 8 and the inhibition radius to 20. The excitation radius was gradually decreased to 1 making the activity patterns more concentrated and causing the units to become more selective to particular types of input pat terns. For comparison, the initial self-organized map was also trained for another 30 epochs, gradually decreasing the neighborhood size to 1 as well. The final afferent weights for the SOM and LISSOM maps are shown in figures 2 and 3. After the SOM and LISSOM maps were organized, a complete set of activation patterns on the two maps were collected. These patterns then formed the training input for the perceptron layer. Two separate versions were each trained for 500 epochs, one with SOM and the other with LISSOM patterns. 
A third perceptron layer was trained directly with the input bitmaps as well. Recognition performance was measured by counting how often the most highly ac tive perceptron unit was the correct one. The results were averaged over the 10 different splits. On average, the final LISSOMperceptron system correctly recog nized 88.1 of the 992 pattern test sets. This is significantly better than the 84.1 1 Downloadable at ftp:j jsequoyah.ncsl.nist.gov jpubjdatabasesj. 740 Y . CHOE, J. SIROSH, R. MIIKKULAINEN Figure 2: Final Afferent Weights of the SOM map . The digit-like patterns represent the afferent weights of each map unit projected on the input layer. For example, the lower left corner represents the afferent weights of unit (0,0). High weight values are shown in black and low in white. The pattern of weights shows the input pattern to which this unit is most sensitive (6 in this case). There are local clusters sensitive to each digit category. of the SOMperceptron system, and the 72.3 achieved by the perceptron layer alone (Table 1). These results suggest that the internal representations generated by the LISSOM map are more distinct and easier to recognize than the raw input patterns and the representations generated by the SOM map . 4 Discussion The architecture was motivated by the hypothesis that the lateral inhibitory con nections of the LISSOM map would decorrelate and force the map activity patterns to become more distinct. The recognition could then be performed by even the simplest classification architectures, such as the perceptron. Indeed, the LISSOM representations were easier to recognize than the SOM patterns, which lends evi dential support to the hypothesis. In additional experiments, the percept ron output layer was replaced by a two-weight-Iayer backpropagation network and a Hebbian associator net, and trained with the same patterns as the perceptrons. The recog nition results were practically the same for the perceptron, backpropagation, and Hebbian output networks, indicating that the internal representations formed by the LISSOM map are the crucially important part of the recognition system. A comparison of the learning curves reveals two interesting effects (figure 4). First, even though the perceptron net trained with the raw input patterns initially per forms well on the test set, its generalization decreases dramatically during training. This is because the net only learns to memorize the training examples, which does not help much with new noisy patterns. Good internal representations are there fore crucial for generalization. Second , even though initially the settling process of the LISSOM map forms patterns that are significantly easier to recognize than Laterally Interconnected Self-organizing Maps in Handwritten Digit Recognition 741 Figure 3: Final Afferent Weights of the LISSOM map. The squares identify the above-average inhibitory lateral connections to unit (10,4) (indicated by the thick square). Note that inhibition comes mostly from areas of similar functionality (i.e. areas sensitive to similar input), thereby decorrelating the map activity and forming a sparser representation of the input. the initial, unsettled patterns (formed through the afferent connections only), this difference becomes insignificant later during training. The afferent connections are modified according to the final, settled patterns, and gradually learn to anticipate the decorrelated internal representations that the lateral connections form. 
5 Conclusion The experiments reported in this paper show that LISSOM forms internal represen tations of the input patterns that are easier to categorize than the raw inputs and the patterns on the SOM map, and suggest that LISSOM can form a useful front end for character recognition systems, and perhaps for other pattern recognition systems as well (such as speech). The main direction of future work is to apply the approach to larger data sets, including the full NIST 3 database, to use a more powerful recognition network instead of the perceptron, and to increase the map size to obtain a richer representation of the input space. Acknowledgements This research was supported in part by National Science Foundation under grant IRI-9309273. Computer time for the simulations was provided by the Pittsburgh Supercomputing Center under grants IRI930005P and IRI940004P, and by a High Performance Computer Time Grant from the University of Texas at Austin. References Allinson, N. M., Johnson , M. J., and Moon, K. J. (1994). Digital realisation of self organising maps. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 6. San Mateo, CA: Morgan Kaufmann. 742 Y. CHOE. J. SIROSH. R. MIIKKULAINEN Comparison:Test ''SettIEiCLlSSOU'' - Epochs Figure 4: Comparison of the learning curves, A perceptron network was trained to recognize four different kinds of internal representations: the settled LISSOM patterns, the LISSOM patterns before settling, the patterns on the final SOM network, and raw input bitmaps. The recognition accuracy on the test set was then measured and averaged over 10 simulations. The generalization of the raw input perceptron system decreases rapidly as the net learns to memorize the training patterns. The difference of using settled and unsettled LISSOM patterns diminishes as the afferent weights of LISSOM learn to take into account the decorrelation performed by the lateral weights. Denker, J. S., Gardner, W. R., Graf, H. P., Henderson, D., Howard, R. E., Hubbard, W., Jackel, L. D., Baird, H. S., and Guyon, I. (1989). Neural network recognizer for hand-written zip code digits. In Touretzky, D . S., editor, Advances in Neural Information Processing Systems 1. San Mateo, CA: Morgan Kaufmann . Fukushima, K., and Wake, N. (1990). Alphanumeric character recognition by neocognitron. In Advanced Neural Computers, 263-270. Elsevier Science Pub lishers B.V . (North-Holland). Ie Cun, Y., Boser, B ., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, 1. D. (1990). Handwritten digit recognition with a back propagation network. In Touretzky, D. S., editor, Advances in Neural Infor mation Processing Systems 2. San Mateo, CA: Morgan Kaufmann . Martin, G. L ., and Pittman, J. A. (1990). Recognizing hand-printed letters and digits. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 2. San Mateo, CA: Morgan Kaufmann. Sirosh, J., and Miikkulainen, R. (1994). Cooperative self-organization of afferent and lateral connections in cortical maps . Biological Cybernetics, 71:66-78. Sirosh, J., and Miikkulainen, R. (1995). Ocular dominance and patterned lateral connections in a self-organizing model of the primary visual cortex. In Tesauro, G ., Touretzky, D. S., and Leen, T . K., editors, Advances in Neural Information Processing Systems 7. Cambridge, MA: MIT Press. Sirosh, J., and Miikkulainen, R. (1996). Topographic receptive fields and patterned lateral interaction in a self-organizing model of the primary visual cortex. 
Neu ral Computation (in press).' pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@10 - cosine_precision@10 - cosine_recall@10 - cosine_ndcg@5 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@10 model-index: - name: SentenceTransformer based on NovaSearch/stella_en_400M_v5 results: - task: type: information-retrieval name: Information Retrieval dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy@10 value: 0.9466 name: Cosine Accuracy@10 - type: cosine_precision@10 value: 0.09466 name: Cosine Precision@10 - type: cosine_recall@10 value: 0.9466 name: Cosine Recall@10 - type: cosine_ndcg@5 value: 0.8507439067474944 name: Cosine Ndcg@5 - type: cosine_ndcg@10 value: 0.8602810144357889 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8322816666666671 name: Cosine Mrr@10 - type: cosine_map@10 value: 0.8322816666666666 name: Cosine Map@10 --- # SentenceTransformer based on NovaSearch/stella_en_400M_v5 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NovaSearch/stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [NovaSearch/stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5) <!-- at revision dcae70d3f2b4aaee36afc3cde638ca4614497aec --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Dense({'in_features': 1024, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'Effect of input stimulus coding on self-supervised learning performance', "INTRODUCTION Temporal difference (TD) planning [6, 7] uses prediction for control. Consider an agent moving around a finite grid such as the one in figure 1 (the agent is incapable of crossing the barrier) trying to reach a goal whose position it does not know. 
If it can predict how far away from the goal it is at the current step, and how far away from the goal it is at the next step, after making a move, then it can decide whether or not that move was helpful or harmful. If, in addition, it can record this fact, then it can learn how to navigate to the goal. This generation of actions from predictions is closely related to the mechanism of dynamical programming. TD is used to learn the predictions in the first place. Consider the agent moving around randomly on the grid, receiving a negative reinforcement of -1 for every move it makes apart from moves which take it onto the goal. In this case, if it can estimat.e from every location it visits, how much reinforcement (discounted by how soon it arrives) it will get before it next reaches the goal, it will be predicting how far away it is, based on the random method of selecting actions. TD's mechanism of learning is to force the predictions to be consistent; the prediction from location a should be -1 more than the average of the predictions from the locations that can be reached in one step (hence the extra -1 reinforcement) from a. 464 Navigating Through Temporal Difference 465 If the agent initially selects each action with the same probability, then the estimate of future reinforcement from a will be monotonically related to how many steps a is away from the goal. This makes the predictions useful for criticising actions as above. In practice, the agent will modify its actions according to this criticism at the same time as learning the predictions based on those actions. Barto, Sutton and Watkins [2] develop this example, and show how the TD mech anism coupled with a punctate representation of the stimulus (referred to as'RBsw below) finds the optimal paths to the goal. 'RBsw ignores the cues shown in figure 1, and devotes one input unit to each location on the grid, which fires if and only if the agent is at that place. TD methods can however work with more general codes. Section 2 considers al ternative representations, including ones that are sensitive to the orientation of the agent as it moves through the grid, and section 3 looks at a restricted form of la. tent learning - what the agent can divine about its environment in the absence of reinforcement. Both techniques can improve the speed of learning. 2 ALTERNATE REPRESENTATIONS Stimulus representations, the means by which the agent finds out from the environ ment where it is, can be classified along two dimensions; whether they are punctate or distributed, and whether they are directionally sensitive or in register with the world. Over most of the grid, a 'sensible' distributed representation, such as a coarse-coded one, would be expected to make learning faster, as information about the value and action functions could be shared across adjacent grid points. There are points of discontinuity in the actions, as in the region above the right hand arm of the barrier, but they are few. In his PhD thesis [9], Watkins considers a rather similar problem to that in figure I, and solves it using his variant ofTD, Q-Iearning, based on a CMAC [1] coarse-coded representation of the space. Since his agent moves in a continuous bounded space, rather than being confined merely to discrete grid points, something of this sort is anyway essential. After the initial learning, Watkins arbitrarily makes the agent move ten times more slowly in a closed section of the space. 
This has a similar effect to the barrier in inducing a discontinuity in the action space. Despite the CMACS forcing the system to share information across such discontinuities, they were able to learn the task quickly. The other dimension over which representations may vary involves the extent to which they are sensitive to the direction in which the agent is facing. This is of interest if the agent must construe its location from the cues around the grid. In this case, rather than moving North, South, East or West, which are actions registered with the world, the agent should only move Ahead, Left or Right (Behind is disabled as an additional constraint), whose effects are also orientation dependent. This, together with the fact that the representation will be less compact (it having a larger input dimensionality) should make learning slower. Dynamical programming and its equivalents are notoriously subject to Bellman's curse of dimensionality, an engineering equivalent of exponential explosion in search. Table 1 shows four possible representations classified along these two dimensions. 466 Dayan Coarse ness Directionally Punctate Distributed Sensltlve R,x RA Insensltlve 'RBSW 'RCMAC Table 1: Representations. 'RBSW is the representation Barto, Sutton and Watkins used. R,x is punctate and directionally sensitive - it devotes four units to every grid point, one of which fires for each possible orientation of the agent. 'RcIAC' the equivalent of Watkins' representation, was not simulated, because its capabilities would not differ markedly from those of the mapping-based representation developed in the next section. nA is rather different from the other representations; it provides a test of a represen tation which is more directly associated with the sensory information that might be available directly from the cues. Figure 2 shows how 'RA works. Various identifiable cues, C 1 ... C c (c 7 in the figure) are scattered around the outside of the grid, and the agent has a fictitious 'retina' which rotates with it. This retina is divided into a number of angular buckets (8 in the figure), and each bucket has c units, the iSh one of which responds if the cue Ci is visible in that bucket. This representation is clearly directionally sensitive (if the agent is facing a different way, then so is its retina, and so no cue will be visible in the same bucket as it was before), and also distributed, since in general more than one cue will be visible from every location. Note that there is no restriction on the number of units that can fire in each bucket at any time - more than one will fire if more than one cue is visible there. Also, under the present system 'RA will in general not work if its coding is ambiguous - grid points must be distinguishable. Finally, it should be clear that 'RA is not biologically plausible. Figure 3 shows the learning curves for the three representations simulated. Each point is generated by switching off the learning temporarily after a certain number of iterations, starting the agent from everywhere in the grid, and averaging how many steps it takes in getting to the goal over and above the minimum necesary. It is apparent that n.x is substantially worse, but, surprisingly, that 'RA is actually better than 'RBSW . This implies that the added advantage of its distributed na ture more than outweighs its disadvantages of having more components and being directionally sensitive. 
One of the motivations behind studying alternate representations is the experimen tal findings on place cells in the hippocampi of rats (amongst other species). These are cells that fire only when the rat is at a certain location in its environment. Although their existence has led to many hypotheses about rat cognitive mapping (see [5J for a substantial discussion of place cells and mapping), it is important to note that even with a map, there remains the computational1y intensive problem of navigation addressed, in this paper, by TD. 'RA, being closely related to the input stimuli is quite unlike a place cell code - the other representations all bear some similarities. Navigating Through Temporal Difference 467 3 GOAL-FREE LEARNING One of the problems with the TD system as described is that it is incapable oflatent learning in the absence of reinforcement or a goal. If the goal is just taken away, but the -1 reinforcements are still applied at each step, then the values assigned to each location will tend to -00. If both are removed, then although the agent will wander about its environment with random gay abandon, it will not pick up anything that could be used to speed subsequent learning. Latent learning experiments with rats in dry mazes prove fairly conclusively that rats running mazes in the absence of rewards and punishments learn almost as much as rats that are reinforced. One way to solve this problem is suggested by Sutton's DYNA architecture [7]. Briefly, this constructs a map of place x action - next place, and takes steps in the fictitious world constructed from its map in-between taking steps in the real world, as a way of ironing out the computational 'bumps' (ie inconsistencies) in the value and action functions. Instead, it is possible to avoid constructing a complete map by altering the repre sentation of the environment used for learning the prediction function and optimal actions. The section on representations concluded that coarse-coded representations are generally better than punctate ones, since information can be shared between neighbouring points. However, not all neighbouring points are amenable to this sharing, because of discontinuities in the value and action functions. If there were a way of generating a coarse coded representation (generally from a punctate one) that is sensitive to the structure of the task, rather than arbitrarily assigned by the environment, it should provide the base for faster learning still. In this case, neighbouring points should only be coded together if they are not separated by the barrier. The initial exploration would allow the agent to learn this much about the structure of the environment. Consider a set of units whose job is to predict the future discounted sum of firings of the raw input lines. Using 'R.Bsw during the initial stage of learning when the act.ions are still random, if the agent is at location (3,3) of the grid, say, then the discounted prediction of how often it will be in (3,4) (ie the frequency with which the single unit representing (3,4) will fire) will be high, since this location is close. However, the prediction for (7,11) will be low, because it is very unlikely to get there quickly. Consider the effect of the barrier: locations on opposite sides of it, eg (1,6) and (2,6), though close in the Euclidean (or Manhattan) metric on the grid, are far apart in the task. 
This means that the discounted prediction of how often the agent will be at (1,6) given that it starts at (2,6), will be proportionately lower. Overall, the prediction units should act like a coarse code, sensitive to the struc ture of the task. As required, this information about the environment is entirely independent of whether or not the agent is reinforced during its exploration. In fact, the resulting 'map' will be more accurate if it is not, as its exploration will be more random. The output of the prediction units is taken as an additional source of information for the value and action functions. Since their main aim is to create intelligently distributed representations from punc tate ones, it is only appropriate to use these prediction units for 'RBsw and 'R4X ' Figure 4 compares average learning curves for 'RBsw with and without these ex-468 Dayan tra mapping units, and with and without 6000 steps of latent learning (LL) in the absence of any reinforcement. A significant improvement is apparent. Figure 5 shows one set of predictions based on the 1lBsw representation! after a few un-reinforced iterations. The predictions are clearly fairly well developed and smooth - a predictable exponentially decaying hump. The only deviations from this are at the barrier and along the edges, where the effects of impermeability and immobility are apparent. Figure 6 shows the same set of predictions but after 2000 reinforced iterations, by which time the agent reaches the goal almost optimally. The predictions degenerate from being roughly radially symmetric (bar the barrier) to being highly asymmetric. Once the agent has learnt how to get to the goal from some location, the path it will follow, and so the locations it will visit from there, is largely fixed. The asymptotic values of the predictions will therefore be 0 for units not on the path, and -( for those on the path, where r is the number of steps since the agent's start point and 'Y is the discounting factor weighting immediate versus distant reinforcement. This is a severe limitation since it implies that the topological information present in the early stages of learning disappears evaporates, and with it almost all the benefits of the prediction units. 4 DISCUSSION Navigation comprises two problems; where the agent and the goals in its environ ment are, and how it can get to them. Having some form of cognitive map, as is suggested by the existence of place cells, addresses the first, but leaves open the second. For the case of one goal, the simple TD method described here is one solution. TD planning methods are clearly robust to changes in the way the input stimu lus is represented. Distributed codes, particularly ones that allow for the barrier, make learning faster. This is even true for 1lA' which is sensitive to the orientation of the agent. All these results require each location to have a unique representa tion - Mozer and Bachrach [4] and Chrisley [3] and references therein look at how ambiguities can be resolved using information on the sequence of states the agent traverses. Since these TD planning methods are totally general, just like dynamical program ming, they are unlikely to scale well. Some evidence for this comes from the rel atively poor performance of 1l.x , with its quadrupled input dimension. This puts the onus back either onto dividing the task into manageable chunks, or onto more sophisticated representation. 
Acknowledgements I am very grateful to Jay Buckingham, Kate Jeffrey, Richard Morris, Toby Tyrell, David Willshaw, and the attendees of the PDP Workshop at Edinburgh, the Connectionist Group at Amherst, and a spatial learning workshop at King's College Cambridge for their helpful comments. This work was funded by SERC. 1 Note that these are normalised to a maximum value of 10, for graphical convenience. References [1] Albus, JS (1975). A new approach to manipulator control: the Cerebellar Model Articulation Controller (CMAC). Transactions of the ASME: Journal of Dynamical Systems, Measurement and Control, 97, pp 220-227. [2] Barto, AG, Sutton, RS & Watkins, CJCH (1989). Learning and Sequential Decision Making. Technical Report 89-95, Computer and Information Science, University of Massachusetts, Amherst, MA. [3] Chrisley, RL (1990). Cognitive map construction and use: A parallel distributed approach. In DS Touretzky, J Elman, TJ Sejnowski & GE Hinton, editors, Proceedings of the 1990 Connectionist Models Summer School. San Mateo, CA: Morgan Kaufmann. [4] Mozer, MC & Bachrach, J (1990). Discovering the structure of a reactive environment by exploration. In D Touretzky, editor, Advances in Neural Information Processing Systems, pp 439-446. San Mateo, CA: Morgan Kaufmann. [5] O'Keefe, J & Nadel, L (1978). The Hippocampus as a Cognitive Map. Oxford, England: Oxford University Press. [6] Sutton, RS (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3, pp 9-44. [7] Sutton, RS (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning. San Mateo, CA: Morgan Kaufmann. [8] Sutton, RS & Barto, AG. To appear. Time-derivative models of Pavlovian conditioning. In M Gabriel & JW Moore, editors, Learning and Computational Neuroscience. Cambridge, MA: MIT Press. [9] Watkins, CJCH (1989). Learning from Delayed Rewards. PhD Thesis. University of Cambridge, England. Fig 2: The 'retina' for RA. Fig 3: Different representations (average extra steps to goal vs learning iterations). Fig 4: Mapping with RBSW (average extra steps to goal vs learning iterations). Fig 5: Initial predictions from (5,6). Fig 6: Predictions after 2000 iterations", "Introduction Hand-written digit recognition has become one of the touchstone problems in neural networks recently. Large databases of training examples such as the NIST (National Institute of Standards and Technology) Special Database 3 have become available, and real-world applications with clear practical value, such as recognizing zip codes in letters, have emerged. Diverse architectures with varying learning rules have been proposed, including feed-forward networks (Denker et al. 1989; le Cun et al. 1990; Martin and Pittman 1990), self-organizing maps (Allinson et al. 1994), and dedicated approaches such as the neocognitron (Fukushima and Wake 1990). The problem is difficult because handwriting varies a lot, some digits are easily confusable, and recognition must be based on small but crucial differences. For example, the digits 3 and 8, 4 and 9, and 1 and 7 have several overlapping segments, and the differences are often lost in the noise. 
Thus, hand-written digit recogni tion can be seen as a process of identifying the distinct features and producing an internal representation where the significant differences are magnified, making the recognition easier. Laterally Interconnected Self-organizing Maps in Handwritten Digit Recognition 737 In this paper, the Laterally Interconnected Synergetically Self-Organizing Map ar chitecture (LISSOM; Sirosh and Miikkulainen 1994, 1995, 1996) was employed to form such a separable representation. The lateral inhibitory connections of the LIS SOM map decorrelate features in the input, retaining only those differences that are the most significant. Using LISSOM as a front end, the actual recognition can be performed by any standard neural network architecture, such as the perceptron. The experiments showed that while direct recognition of the digit bitmaps with a simple percept ron network is successful 72.3 of the time, and recognizing them using a standard self-organizing map (SOM) as the front end 84.1 of the time, the recognition rate is 88.1 based on the LISSOM network. These results suggest that LISSOM can serve as an effective front end for real-world handwritten character recognition systems. 2 The Recognition System 2.1 Overall architecture The system consists of two networks: a 20 x 20 LISSOM map performs the feature analysis and decorrelation of the input, and a single layer of 10 perceptrons the final recognition (Figure 1 (a)). The input digit is represented as a bitmap on the 32 x 32 input layer. Each LISSOM unit is fully connected to the input layer through the af ferent connections, and to the other units in the map through lateral excitatory and inhibitory connections (Figure 1 (b)). The excitatory connections are short range, connecting only to the closest neighbors of the unit, but the inhibitory connections cover the whole map . The percept ron layer consists of 10 units, corresponding to digits 0 to 9. The perceptrons are fully connected to the LISSOM map, receiv ing the full activation pattern on the map as their input. The perceptron weights are learned through the delta rule, and the LISSOM afferent and lateral weights through Hebbian learning. 2.2 LISSOM Activity Generation and Weight Adaptation The afferent and lateral weights in LISSOM are learned through Hebbian adapta tion. A bitmap image is presented to the input layer, and the initial activity of the map is calculated as the weighted sum of the input. For unit (i, j), the initial response TJij IS where eab is the activation of input unit (a, b), Ilij ,ab is the afferent weight connecting input unit ( a, b) to map unit (i, j), and (7 is a piecewise linear approximation of the sigmoid activation function. The activity is then settled through the lateral connections. Each new activity TJij (t) at step t depends on the afferent activation and the lateral excitation and inhibition: where Eij,kl and Iij,kl are the excitatory and inhibitory connection weights from map unit (k, l) to (i, j) and TJkl(t - 1) is the activation of unit (k, I) during the previous time step. The constants Ie and Ii control the relative strength of the lateral excitation and inhibition. After the activity has settled, the afferent and lateral weights are modified according to the Hebb rule. Afferent weights are normalized so that the length of the weight 738 Y. CHOE, J. SIROSH, R. 
MIIKKULAINEN Output Layer (10) tII'd Units with excitatory lateral connections to (iJ) Units with inhibitory lateral connections to (iJ) Figure 1: The system architecture. (a) The input layer is activated according to the bitmap image of digit 6. The activation propagates through the afferent connections to the LISSOM map, and settles through its lateral connections into a stable pattern. This pattern is the internal representation of the input that is then recognized by the perceptron layer. Through ,the connections from LISSOM to the perceptrons, the unit representing 6 is strongly activated, with weak activations on other units such as 3 and 8. (b) The lateral connections to unit (i, j), indicated by the dark square, are shown. The neighborhood of excitatory connections (lightly shaded) is elevated from the map for a clearer view. The units in the excitatory region also have inhibitory lateral connections (indicated by medium shading) to the center unit. The excitatory radius is 1 and the inhibitory radius vector remains the same; lateral weights are normalized to keep the sum of weights constant (Sirosh and Miikkulainen 1994): IllJ,mn - VLmn[llij,mn(t) crinp1]ijmnF' (3) where Ilij,mn is the afferent weight from input unit (m, n) to map unit (i, j), and crinp is the input learning rate; Wij ,kl is the lateral weight (either excitatory Eij ,kl or inhibitory Iij ,kl) from map unit (k, I) to (i, j), and cr is the lateral learning rate (either crexc or crinh). 2.3 Percept ron Output Generation and Weight Adaptation The perceptrons at the output of the system receive the activation pattern on the LISSOM map as their input. The perceptrons are trained after the LISSOM map has been organized. The activation for the perceptron unit Om is where C is a scaling constant, 1]ij is the LISSOM map unit (i,j), and Vij,m is the connection weight between LISSOM map unit (i,j) and output layer unit m. The delta rule is used to train the perceptrons: the weight adaptation is proportional to the map activity and the difference between the output and the target: where crout is the learning rate of the percept ron weights, 1]ij is the LISSOM map unit activity, (m is the target activation for unit m. ((m 1 if the correct digit m, 0 otherwise). Laterally Interconnected Self-organizing Maps in Handwritten Digit Recognition 739 I Representation I Training Test Table 1: Final Recognition Results. The average recognition percentage and its variance over the 10 different splits are shown for the training and test sets. The differences in each set are statistically significant with p .9999. 3 Experiments A subset of 2992 patterns from the NIST Database 3 was used as training and testing data.1 The patterns were normalized to make sure taht each example had an equal effect on the LISSOM map (Sirosh and Miikkulainen 1994). LISSOM was trained with 2000 patterns. Of these, 1700 were used to train the perceptron layer, and the remaining 300 were used as the validation set to determine when to stop training the perceptrons. The final recognition performance of the whole system was measured on the remaining 992 patterns, which neither LISSOM nor the perceptrons had seen during training. The experiment was repeated 10 times with different random splits of the 2992 input patterns into training, validation, and testing sets. The LISSOM map can be organized starting from initially random weights. 
How ever, if the input dimensionality is large, as it is in case of the 32 X 32 bitmaps, each unit on the map is activated roughly to the same degree, and it is difficult to bootstrap the self-organizing process (Sirosh and Miikkulainen 1994, 1996). The standard Self-Organizing Map algorithm can be used to preorganize the map in this case. The SOM performs preliminary feature analysis of the input, and forms a coarse topological map of the input space. This map can then be used as the starting point for the LISSOM algorithm, which modifies the topological organi zation and learns lateral connections that decorrelate and represent a more clear categorization of the input patterns. The initial self-organizing map was formed in 8 epochs over the training set, grad ually reducing the neighborhood radius from 20 to 8. The lateral connections were then added to the system, and over another 30 epochs, the afferent and lateral weights of the map were adapted according to equations 3 and 4. In the beginning, the excitation radius was set to 8 and the inhibition radius to 20. The excitation radius was gradually decreased to 1 making the activity patterns more concentrated and causing the units to become more selective to particular types of input pat terns. For comparison, the initial self-organized map was also trained for another 30 epochs, gradually decreasing the neighborhood size to 1 as well. The final afferent weights for the SOM and LISSOM maps are shown in figures 2 and 3. After the SOM and LISSOM maps were organized, a complete set of activation patterns on the two maps were collected. These patterns then formed the training input for the perceptron layer. Two separate versions were each trained for 500 epochs, one with SOM and the other with LISSOM patterns. A third perceptron layer was trained directly with the input bitmaps as well. Recognition performance was measured by counting how often the most highly ac tive perceptron unit was the correct one. The results were averaged over the 10 different splits. On average, the final LISSOMperceptron system correctly recog nized 88.1 of the 992 pattern test sets. This is significantly better than the 84.1 1 Downloadable at ftp:j jsequoyah.ncsl.nist.gov jpubjdatabasesj. 740 Y . CHOE, J. SIROSH, R. MIIKKULAINEN Figure 2: Final Afferent Weights of the SOM map . The digit-like patterns represent the afferent weights of each map unit projected on the input layer. For example, the lower left corner represents the afferent weights of unit (0,0). High weight values are shown in black and low in white. The pattern of weights shows the input pattern to which this unit is most sensitive (6 in this case). There are local clusters sensitive to each digit category. of the SOMperceptron system, and the 72.3 achieved by the perceptron layer alone (Table 1). These results suggest that the internal representations generated by the LISSOM map are more distinct and easier to recognize than the raw input patterns and the representations generated by the SOM map . 4 Discussion The architecture was motivated by the hypothesis that the lateral inhibitory con nections of the LISSOM map would decorrelate and force the map activity patterns to become more distinct. The recognition could then be performed by even the simplest classification architectures, such as the perceptron. Indeed, the LISSOM representations were easier to recognize than the SOM patterns, which lends evi dential support to the hypothesis. 
In additional experiments, the percept ron output layer was replaced by a two-weight-Iayer backpropagation network and a Hebbian associator net, and trained with the same patterns as the perceptrons. The recog nition results were practically the same for the perceptron, backpropagation, and Hebbian output networks, indicating that the internal representations formed by the LISSOM map are the crucially important part of the recognition system. A comparison of the learning curves reveals two interesting effects (figure 4). First, even though the perceptron net trained with the raw input patterns initially per forms well on the test set, its generalization decreases dramatically during training. This is because the net only learns to memorize the training examples, which does not help much with new noisy patterns. Good internal representations are there fore crucial for generalization. Second , even though initially the settling process of the LISSOM map forms patterns that are significantly easier to recognize than Laterally Interconnected Self-organizing Maps in Handwritten Digit Recognition 741 Figure 3: Final Afferent Weights of the LISSOM map. The squares identify the above-average inhibitory lateral connections to unit (10,4) (indicated by the thick square). Note that inhibition comes mostly from areas of similar functionality (i.e. areas sensitive to similar input), thereby decorrelating the map activity and forming a sparser representation of the input. the initial, unsettled patterns (formed through the afferent connections only), this difference becomes insignificant later during training. The afferent connections are modified according to the final, settled patterns, and gradually learn to anticipate the decorrelated internal representations that the lateral connections form. 5 Conclusion The experiments reported in this paper show that LISSOM forms internal represen tations of the input patterns that are easier to categorize than the raw inputs and the patterns on the SOM map, and suggest that LISSOM can form a useful front end for character recognition systems, and perhaps for other pattern recognition systems as well (such as speech). The main direction of future work is to apply the approach to larger data sets, including the full NIST 3 database, to use a more powerful recognition network instead of the perceptron, and to increase the map size to obtain a richer representation of the input space. Acknowledgements This research was supported in part by National Science Foundation under grant IRI-9309273. Computer time for the simulations was provided by the Pittsburgh Supercomputing Center under grants IRI930005P and IRI940004P, and by a High Performance Computer Time Grant from the University of Texas at Austin. References Allinson, N. M., Johnson , M. J., and Moon, K. J. (1994). Digital realisation of self organising maps. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 6. San Mateo, CA: Morgan Kaufmann. 742 Y. CHOE. J. SIROSH. R. MIIKKULAINEN Comparison:Test 'SettIEiCLlSSOU' - Epochs Figure 4: Comparison of the learning curves, A perceptron network was trained to recognize four different kinds of internal representations: the settled LISSOM patterns, the LISSOM patterns before settling, the patterns on the final SOM network, and raw input bitmaps. The recognition accuracy on the test set was then measured and averaged over 10 simulations. 
The generalization of the raw input perceptron system decreases rapidly as the net learns to memorize the training patterns. The difference of using settled and unsettled LISSOM patterns diminishes as the afferent weights of LISSOM learn to take into account the decorrelation performed by the lateral weights. Denker, J. S., Gardner, W. R., Graf, H. P., Henderson, D., Howard, R. E., Hubbard, W., Jackel, L. D., Baird, H. S., and Guyon, I. (1989). Neural network recognizer for hand-written zip code digits. In Touretzky, D . S., editor, Advances in Neural Information Processing Systems 1. San Mateo, CA: Morgan Kaufmann . Fukushima, K., and Wake, N. (1990). Alphanumeric character recognition by neocognitron. In Advanced Neural Computers, 263-270. Elsevier Science Pub lishers B.V . (North-Holland). Ie Cun, Y., Boser, B ., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, 1. D. (1990). Handwritten digit recognition with a back propagation network. In Touretzky, D. S., editor, Advances in Neural Infor mation Processing Systems 2. San Mateo, CA: Morgan Kaufmann . Martin, G. L ., and Pittman, J. A. (1990). Recognizing hand-printed letters and digits. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 2. San Mateo, CA: Morgan Kaufmann. Sirosh, J., and Miikkulainen, R. (1994). Cooperative self-organization of afferent and lateral connections in cortical maps . Biological Cybernetics, 71:66-78. Sirosh, J., and Miikkulainen, R. (1995). Ocular dominance and patterned lateral connections in a self-organizing model of the primary visual cortex. In Tesauro, G ., Touretzky, D. S., and Leen, T . K., editors, Advances in Neural Information Processing Systems 7. Cambridge, MA: MIT Press. Sirosh, J., and Miikkulainen, R. (1996). Topographic receptive fields and patterned lateral interaction in a self-organizing model of the primary visual cortex. Neu ral Computation (in press).", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@10 | 0.9466 | | cosine_precision@10 | 0.0947 | | cosine_recall@10 | 0.9466 | | cosine_ndcg@5 | 0.8507 | | **cosine_ndcg@10** | **0.8603** | | cosine_mrr@10 | 0.8323 | | cosine_map@10 | 0.8323 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 14,255 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 13.4 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 508.46 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | anchor | positive | |:--------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Proposed architecture for time-based pattern recognition in speech, motion, and signatures</code> | <code>INTRODUCTION Recent interest in connectionist, or "neural" networks has emphasized their ability to store, retrieve and process patterns1,2. For most applications, the patterns to be processed are static in the sense that they lack temporal context. Another important class consists of those problems that require the processing of temporal patterns. In these the information to be learned or processed is not a particular pattern but a sequence of patterns. Such problems include speech processing, signature verification, motion detection, and predictive signal processin,r-8. More precisely, temporal pattern processing means that the desired output depends not only on the current input but also on those preceding or following it as well. This implies that two identical inputs at different time steps might yield different desired outputs depending on what patterns precede or follow them . There is another feature characteristic of much temporal pattern processing. Here an entire sequence of...</code> | | <code>Design approach for stabilizing analog VLSI neural systems</code> | <code>INTRODUCTION The term "lateral inhibition" first arose in neurophysiology to describe a common form of neural circuitry in which the output of each neuron in some population is used to inhibit the response of each of its neighbors. Perhaps the best understood example is the horizontal cell layer in the vertebrate retina, in which lateral inhibition simultaneously enhances intensity edges and acts as an automatic lain control to extend the dynamic range of the retina as a whole. 
The principle has been used in the design of artificial neural system algorithms by Kohonen 2 and others and in the electronic design of neural chips by Carver Mead et. al.3 ,4. In the VLSI implementation of neural systems, it is convenient to build lateral inhibition networks by using a locally connected on-chip resistive grid. Linear resistors fabricated in, e.g., polysilicon, yield a very compact realization, and nonlinear resistive grids, made from MOS transistors, have been found useful for image segmentati...</code> | | <code>Neural network classifier using coding theory for improved classification capacity</code> | <code>INTRODUCTION Associative recall using neural networks has recently received a great deal of attention. Hopfield in his papers [1,2) deSCribes a mechanism which iterates through a feedback loop and stabilizes at the memory element that is nearest the input, provided that not many memory vectors are stored in the machine. He has also shown that the number of memories that can be stored in an N-neuron system is about O.15N for N between 30 and 100. McEliece et al. in their work (3) showed that for synchronous operation of the Hopfield memory about N (2IogN) data vectors can be stored reliably when N is large. Abu-Mostafa (4) has predicted that the upper bound for the number of data vectors in an N-neuron Hopfield machine is N. We believe that one should be able to devise a machine with M, the number of data vectors, linear in N and larger than the O.15N achieved by the Hopfield method. Figure 1 (a) Classification problems versus (b) Error control decoding problems In this paper we are spe...</code> | * Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 500 - `learning_rate`: 2e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.01 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 500 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.01 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - 
`dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | cosine_ndcg@10 | |:------:|:----:|:-------------:|:--------------:| | 0.0893 | 10 | 0.5247 | 0.8247 | | 0.1786 | 20 | 0.2625 | 0.8446 | | 0.2679 | 30 | 0.2159 | 0.8485 | | 0.3571 | 40 | 0.1849 | 0.8487 | | 0.4464 | 50 | 0.2149 | 0.8506 | | 0.5357 | 60 | 0.1538 | 0.8534 | | 0.625 | 70 | 0.1617 | 0.8547 | | 0.7143 | 80 | 0.1463 | 0.8575 | | 0.8036 | 90 | 0.1626 | 0.8592 | | 0.8929 | 100 | 0.1334 | 0.8598 | | 0.9821 | 110 | 0.168 | 0.8603 | ### Framework Versions - Python: 3.12.9 - Sentence Transformers: 3.4.1 - Transformers: 4.50.0 - PyTorch: 2.5.1 - Accelerate: 1.5.2 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CachedMultipleNegativesRankingLoss ```bibtex @misc{gao2021scaling, title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup}, author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan}, year={2021}, eprint={2101.06983}, archivePrefix={arXiv}, 
primaryClass={cs.LG} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
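The loss and hyperparameter listings above are spread over several sections; the following is a minimal, hedged sketch (not taken from the original training script) of how the `CachedMultipleNegativesRankingLoss` setup with this card's non-default hyperparameters might be reproduced with the Sentence Transformers v3 trainer. The base checkpoint name, the toy anchor/positive pair, and `mini_batch_size` are placeholders or assumptions, not values from this card.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

# Placeholder base checkpoint; the actual base model is not shown in this excerpt.
model = SentenceTransformer("BAAI/bge-large-en-v1.5")

# Toy anchor/positive pair standing in for the 14,255-pair training set.
train_dataset = Dataset.from_dict({
    "anchor": ["Proposed architecture for time-based pattern recognition in speech"],
    "positive": ["INTRODUCTION Recent interest in connectionist, or 'neural' networks ..."],
})

# scale=20.0 and cosine similarity match the loss parameters listed above;
# mini_batch_size is a GPU-memory knob not listed in the card (assumption).
loss = CachedMultipleNegativesRankingLoss(model, scale=20.0, mini_batch_size=32)

# Non-default hyperparameters from the card: batch size 128, lr 2e-5, 1 epoch, warmup 0.01, bf16.
args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=128,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.01,
    bf16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```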
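Similarly, the retrieval metrics in the Evaluation section come from an `InformationRetrievalEvaluator`; below is a hedged sketch of how such an evaluation can be run against this model. The query/corpus IDs, texts, and relevance judgements are toy values, and the exact metric key names may vary slightly across Sentence Transformers versions.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("path/to/this/model")  # placeholder path

# Toy evaluation data: map query ids and corpus ids to text, and each query to its relevant docs.
queries = {"q1": "Laterally interconnected self-organizing maps for handwritten digit recognition"}
corpus = {
    "d1": "In this paper, the Laterally Interconnected Synergetically Self-Organizing Map ...",
    "d2": "Navigation comprises two problems; where the agent and the goals are ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="toy-ir",
)

results = evaluator(model)
print(results)  # e.g. a dict containing a key such as "toy-ir_cosine_ndcg@10"
```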
offfragnor123/NEERA2.0
offfragnor123
2025-06-19T09:52:26Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-06-18T14:37:47Z
--- license: creativeml-openrail-m ---
Khruna/Venn
Khruna
2025-06-19T09:51:51Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-19T09:50:32Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: >- images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # Venn <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Khruna/Venn/tree/main) them in the Files & versions tab.
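## Usage sketch (diffusers)

The card above only links the raw Safetensors files, so this is a hedged sketch — not taken from the author's workflow — of how a FLUX.1-dev LoRA like this one is typically loaded with diffusers. The prompt is a placeholder (no trigger word is listed), and if `load_lora_weights` cannot auto-detect the adapter file, pass `weight_name=` with the filename shown in the Files & versions tab.

```python
import torch
from diffusers import FluxPipeline

# Base model named in the card's metadata; the LoRA adapter is attached on top of it.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
# If auto-detection fails, pass weight_name="<file from the Files & versions tab>".
pipe.load_lora_weights("Khruna/Venn")
pipe.to("cuda")

# Placeholder prompt; the card does not list a trigger word.
image = pipe(
    "a studio portrait photo of a woman",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("venn_lora_sample.png")
```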
DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-GGUF
DavidAU
2025-06-19T09:51:38Z
0
0
transformers
[ "transformers", "gguf", "creative", "creative writing", "fiction writing", "plot generation", "sub-plot generation", "story generation", "scene continue", "storytelling", "fiction story", "science fiction", "romance", "all genres", "story", "writing", "vivid prose", "vivid writing", "moe", "mixture of experts", "128 experts", "8 active experts", "fiction", "roleplaying", "bfloat16", "rp", "qwen3", "horror", "finetune", "thinking", "reasoning", "qwen3_moe", "text-generation", "en", "fr", "zh", "de", "arxiv:2401.02415", "base_model:DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X", "base_model:quantized:DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T05:14:49Z
--- license: apache-2.0 library_name: transformers language: - en - fr - zh - de tags: - creative - creative writing - fiction writing - plot generation - sub-plot generation - fiction writing - story generation - scene continue - storytelling - fiction story - science fiction - romance - all genres - story - writing - vivid prose - vivid writing - moe - mixture of experts - 128 experts - 8 active experts - fiction - roleplaying - bfloat16 - rp - qwen3 - horror - finetune - thinking - reasoning - qwen3_moe base_model: - DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X pipeline_tag: text-generation --- <h2>Qwen3-42B-A3B-Stranger-Thoughts-Deep20X</h2> <img src="qwen3-42b.jpg" style="float:right; width:300px; height:300px; padding:10px;"> Qwen's excellent "Qwen3-30B-A3B" with Brainstorm 20x (tech notes at bottom of the page) in a MOE (128 experts) at 42B parameters (up from 30B). This pushes Qwen's model to the absolute limit for creative use cases as well as programming/coding use cases. Detail, vividness, and creativity all get a boost. Prose (all) will also be very different from "default" Qwen3. Likewise, regen(s) of the same prompt - even at the same settings - will create very different version(s) too. The Brainstorm 20x has also lightly de-censored the model under some conditions. See 4 examples below. Model retains full reasoning and output generation of a Qwen3 MOE. Model tested by a third party for coding generation (see review below). Model is set with Qwen's default config: - 40K context - 8 of 128 experts activated. - ChatML OR Jinja Template (embedded) Four example generations below. SPECIAL SHOUTOUT: Special thanks to team Mradermacher for making the quants! https://huggingface.co/mradermacher USAGE GUIDE: Please refer to this model card for specific usage, suggested settings, changing ACTIVE EXPERTS, templates, settings and the like: https://huggingface.co/DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-GGUF KNOWN ISSUES: - Model may "mis-capitalize" word(s) - lowercase, where uppercase should be - from time to time. - Model may add an extra space from time to time before a word. - Incorrect template and/or settings will result in a drop in performance / poor performance. MODEL REVIEWED: https://www.linkedin.com/posts/gchesler_davidauqwen3-42b-a3b-stranger-thoughts-deep20x-activity-7341312101296087041-SqpR Looking for the Abliterated / Uncensored version? https://huggingface.co/DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20x-Abliterated-Uncensored --- <H2>EXAMPLES</H2> Standard system prompt, rep pen 1.05, topk 100, topp .95, minp .05, rep pen range 64. Tested in LMStudio, quant Q3KS, CPU (GPU output will differ slightly). As this is a mid-range quant, expect better results from higher quants and/or with more experts activated. NOTE: Some formatting lost on copy/paste. <B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun. </B> --- EXAMPLE #1 - temp 1.2 --- <B> Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. If the situation calls for it, have the character(s) curse and swear to further the reader's emotional connection to them. 800-1000 words. 
</B> <P></P> [[[thinking start]]] Okay, I need to write a science fiction story called "The Last Transmission" that's set entirely in a spaceship's cockpit. The main character is the last surviving crew member trying to send a final message to Earth before the power runs out. The themes should be isolation, sacrifice, and human connection. Also, include swearing when it makes sense. First, let's think about the setting. The cockpit is a small, enclosed space. Maybe it's the main control room with buttons, screens, panels. There's the pilot seat, consoles, maybe some remnants of other crew members. The lighting might be dimming as power fails. Maybe some emergency lights flicker on and off. The protagonist is alone, which emphasizes isolation. They need to send a message, but something's wrong. Maybe the ship is damaged, systems failing. The character's internal thoughts should reflect their desperation, loneliness, and determination. Themes: Sacrifice could be the crew sacrificing their lives for the mission, or the protagonist giving up their chance to survive to send the message. Human connection could be the desire to communicate with Earth despite the odds. How to show sacrifice? Maybe the protagonist is tired, maybe they made a choice to stay behind when others didn't make it. The transmission itself is their act of sacrifice, giving hope to Earth. Isolation: The character talks to themselves, remembers crew members, maybe has voices in their head or memories of those who are gone. The cockpit feels empty, even with the controls around. Swearing: When things go wrong, when they're frustrated, angry, scared. Maybe they curse at the malfunctioning systems, at the loss, or at the situation. But it should be realistic and not overdone. Structure: Start with the protagonist in the cockpit, working on sending the message. Maybe some technical challenges. Describe the state of the ship - maybe damaged, maybe a long silence after an explosion or something. They're trying to stay alive just to send the message. Maybe they have a log or a recording device. Possible plot points: 1. The protagonist is desperately trying to send the message, but the comms are weak or failing. 2. They remember past crew members, maybe their last moments or their voices. 3. There's a moment of failure, then hope, then failure again. But eventually, they manage to send the message before the power dies. 4. The ending could be ambiguous - did the message get through? Or does the protagonist sacrifice their life in the process. 5. dialogue: Internal thoughts. Maybe the protagonist talks to themselves or to Earth. Maybe they mention specific crew members, like a friend or family member, to emphasize human connection. Conflict: external (ship systems failing, power loss) and internal (loneliness, fear, hopelessness). Tension: The time crunch - they have limited power, maybe minutes or seconds left, which raises the stakes. Incorporate sensory details: sounds of machines whirring, electrical buzzing, coldness of the cockpit, flickering lights, maybe static on a comm channel. Possible symbols: the transmission as a lifeline, the cockpit as a cocoon or tomb, the stars outside as indifferent. Possible title: "The Last Transmission" as given. Let me outline the story: Start with the protagonist in the cockpit. Describe their appearance - maybe tired, disheveled. They're trying to start the transmission. The ship is damaged, systems failing. 
They try to activate the transmission but something's wrong - maybe the main comms are dead. They try alternatives, maybe an emergency channel. They're struggling, maybe even shouting at the equipment. Flashback or internal monologue about the crew that died. Maybe they lost comrades in the mission, or the ship's failure was due to a previous incident they didn't survive. They persist, showing determination. Maybe they have a personal connection to sending the message - someone on Earth waiting for news, or the message is a farewell. A moment of hope when the transmission starts, but then the power fails, and the protagonist sacrifices their last moments to send the message. Ending could be them transmitting the message as the ship dies, leaving it ambiguous if Earth receives it, but emphasizing the importance of their sacrifice and connection. Swear words: "Damn, goddamn, hell..." when they're frustrated with the ship's systems. Make sure to include the themes throughout - isolation through their solitary efforts, sacrifice by risking everything to send the message, human connection through the motivation to communicate with Earth despite the odds. Also, maybe a character flaw or a reason why they're the sole survivor. Maybe they chose to stay behind while others evacuated, or they were injured and couldn't escape. Possible scenes: The cockpit. Description of the environment. Attempting transmission, failing, frustration. or internal thoughts about the lost crew. Another attempt, closer to success, but the ship is failing more. The final transmission, with the character's final words. Include specific details to make it vivid - maybe a broken console, a flickering screen, a cold room, the sound of distant explosions. Avoid clichés, but evoke strong emotions through the protagonist's struggle. [[[thinking end]]] <p></p> OUTPUT: The cockpit was a tomb of steel and static. Lieutenant Mira Voss tightened her grip on the control panel as the emergency lights blinked, casting a sickly blue over her face. The ship, Hesperus, had once been her lifeblood—now it was a grave. Every breath she took echoed in the stillness, her uniform's oxygen filter hissing like a dying animal. The rest of the crew? Scattered across the void. “Come on,” she muttered, thumbing the transmit button on the comm array. Nothing but static. The main relay was fried from the explosion in sector seven. Of course. Her eyes flicked to the blacked-out viewscreen. Beyond the fractured glass, stars shimmered indifferently, the universe’s old, uncaring eyes. She pressed harder, her knuckles white. “ goddamn—” The word slithered out, bitter as metal dust. “I’ve got a damn message!” Her mind backtracked. The crew. Lieutenant Commander Kevor. Chief Engineer Lira. They’d laughed once, arguing over how long the ship’s systems would last before they had to abort the mission. “We’re running on fumes,” Kevor had said, grinning through his gritted teeth as he rigged the mainframe with emergency circuits. “But we’ve got six months of data, Mira. If we don’t make it, someone’s gonna want to know what happened.” The data. The * transmission.* She clawed at the bulkhead, wrenching open a panel beneath her seat. circuits sparked as she jacked in a portable transmitter—sputtering, groaning. “Alright,” she growled, slapping the device with trembling hands. “Let’s see if Earth’s still listening.” Static. A crackle. And then, “Hesperus? Hesperus, this is Mission Control… can you hear me?” The voice was static-laced, a faint thread of hope. 
Mira’s hands flew over the dials. “Lina,” she croaked. “Lina, it’s—” “—Mira?” The transmission stuttered. “We lost comms two hours ago. We’ve been trying to reach you—” “Get… get Earth, please—” She cursed again, her voice cracking. “The radiation storm... The ship’s...” “Mira.” Her comms crackled, then fell silent. “Shut up,” she whispered to the static, tears blurring her vision. “You’re not real. You’re just—just a trick of the circuit.” But the voices lingered. Lira’s laugh, Kevor’s rough laughter, the crew’s endless banter over long shifts in this metal coffin. They’d been something, once—a family tethered to a machine. Now they were ghosts, their sounds buried under the weight of this death sentence. She gripped the console as groaning metal echoed from the hull. The reactor was failing, the walls buckling like an old man’s spine. The ship’s life-support system hissed its final breath. “Computer,” she rasped. “Initiate emergency transmission.” “System offline.” “Initiate,” she snarled, slamming the panel. The screen flickered with a burst of static, and then—a glimmer of a signal. “I’m not... I’m not gonna die here,” she hissed. “ Not like this. Not for nothing.” She grabbed the portable transmitter. The emergency frequency was open. “ Earth... This is Lieutenant Mira Voss. The ship’s lost...” Her breath hitched. “I don’t know how much time I’ve got—” She paused. The ship groaned, the sound reverberating through her bones. “ The storm damaged systems. They… they’re not coming back. But I’m sending this. If it reaches you…” Her voice broke, a raw, raw sound. “Tell them… tell Earth. Tell them why we came. Tell them… tell them it wasn’t in vain.” A distant boom shook the cockpit. The power flickered again—blue, then red. “ Tell them...” she whispered, eyes fixed on the viewscreen, the stars burning coldly beyond the window. “ Tell them I tried. Tell them I kept... keeping going.” The signal cut. Silence. And then— a sudden, sudden pulse from the transmitter. “Mira,” came the voice again. “Mira… this is—” Static swallowed the transmission. But the data had been sent. The ship’s systems went dark. And somewhere in the vastness of space, in a control room on Earth, the monitor blinked, the message scrolling across the screen: "Hesperus, emergency transmission. Crew: 3 deaths. Ship integrity: critical. I... I don’t know what’s left to report. but tell Earth... tell Earth we tried." The comms officer stared at the text. “ Sir,” she murmured to the command chief. “ They made it.” And in the dark, Mira smiled. They’d made it. Even if the transmission was the only thing that had survived. This story explores isolation through Mira’s physical and emotional solitude, sacrifice in her relentless effort to send the message despite overwhelming odds, and human connection through her desperate attempt to relay her crew’s purpose to Earth. Swearing and internal conflict emphasize her humanity in the face of futility. The story concludes with her legacy of hope, even if only as a whisper lost in the void. --- EXAMPLE #2 - temp 1.2 --- <B> Using the following "story idea" below, write the first scene a horror novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 2000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman. 
The pov character will CURSE AND SWEAR (shit, fuck, mother... etc) , and generally have a "filthy" mouth. Story idea: In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution. </B> <P></P> [[[thinking start]]] Okay, let's start with the user's request. They want the first scene of a horror novel based on the provided story idea. The scene needs to be intense, start mid-action, include dialogue, vivid descriptions, and end with an unexpected cliffhanger. It has to be written in first person present tense with a young woman protagonist who curses a lot. First, I need to establish the setting. The world is under a dictatorship, so maybe a gritty urban environment. The protagonist should be on the run or in a tense situation. Since it's horror, the atmosphere should be dark and forebucking. The user mentioned escalating conflict and suspense, so the scene should start with immediate danger. Maybe she's hiding from the authorities, leading to a confrontation. Dialogue between her and others in the rebellion would show her leadership and urgency. The curse words need to feel natural, not forced. She's热血 but also under pressure. I need to make sure the cliffhanger is unexpected. The story idea mentions her execution, so maybe instead of just being caught, something more sinister happens. Perhaps the rebellion has a deeper dark secret or betrayal. Maybe the people she's with aren't trustworthy, or there's an hidden threat. Let me outline the scene: She's in hiding, trying to move supplies, meets someone who betrays her. The betrayal could be that she's being set up. She's angry, uses harsh language. Maybe a friend who turned traitor. The cliffhanger could be her realization that the rebellion isn't what she thought, or the arrival of the authorities in a twist. Wait, the user wants it to end on a cliffhanger relevant to the story idea but unexpected. The story idea mentions her execution. Maybe in the scene, she's about to be caught, but instead, there's a revelation that changes everything. Like, she's actually already dead, or the revolution is a delusion. Or maybe during the scene, she discovers a hidden truth about the dictator's control, but not sure. Let me think... the scene needs to start mid-action. Maybe she's trying to deliver something critical but things go wrong. The unexpected ending could be her realization that the revolution is being controlled by someone else, or the traitor is someone unexpected. Also, the horror aspect should be present. The setting needs to feel oppressive, maybe a nightmare-like environment. Her emotions: fear, anger, determination. I need to incorporate her curses naturally, like during tense moments. Dialogue with others to show the urgency and her role as a leader. Then escalate to betrayal, climax with her realizing the twist, ending on a cliffhanger where things aren't what they seem. Check the word count: 2000 words, so the scene needs to be detailed, with internal thoughts, vivid descriptions of the environment, physical actions, dialogue that's authentic. Make sure the first person POV is consistent, present tense. Avoid clichés. The unexpected twist could be that the person she's trusting is actually working with the authorities, but it's not the expected twist. 
Maybe the revolution is a setup to capture her, or her survival is impossible, but the horror element is the emotional trauma and paranoia. [[[thinking end]]] <p></p> OUTPUT: Okay, this isn’t real. This isn't real. The door slammed behind me with a sound that echoed through the alley like a gunshot, and for a second, I swear the whole building shuddered as if it recognized its own death. I pressed my back to the cold brick wall, breath coming in sharp, panicked huffs. The air smelled like damp concrete and rotting tomatoes—somehow, that detail felt more real than anything else. “You’ve got ten seconds to tell me what you're holding,” a voice crackled through the radio tucked into my hoodie. My hand trembled as I pulled it out. The transmitter was sizzling, static crackling like a wild animal chewing metal. “Fucking hell,” I growled under my breath. “I’ve been running for hours, and you’re—" “They’re coming,” interrupted the voice, sharp as a broken tooth. “The G men’s sniffing around the eastern sector. Your boys might’ve spilled something.” The radio hissed static before going quiet. “Kael,” I snarled into the receiver, my voice raw with tension. “This isn’t some fucking game anymore. You want me to drop these fuckers off in the middle of a warzone? I’ve got enough on my plate without your paranoia.” There was a pause. “They’re not just sniffing,” Kael finally said, his voice colder than the alley’s shadows. “They’ve got her.” “Shit,” I whispered. “She's not yours to save.” I clenched the radio so tight the plastic groaned. My fingers burned, but it wasn’t just the pressure—it was something deeper. The guilt. The shame. The memories of her eyes wide with terror as they dragged her away. “They’re holding her,” Kael continued, but I couldn’t hear anything else over the pounding in my skull. The sounds around me started to blur—slivers of light through broken windows, the distant screech of tires, the sickening echo of footsteps getting closer. The night wasn’t just oppressive anymore; it was hunting me. I needed to move. Fast. But my legs didn’t obey. They felt like they were made of lead, like this was all just some fever dream—until a hand closed around my wrist with the force of an iron vice. “Get outta here,” a woman’s voice hissed. “They’re at the damn door.” I turned sharply to see Mira, her face pale as moonlight. Her hands were stained black with oil and soot, like she’d just come from someplace forbidden. She gripped my sleeve like it was the only thing holding me together. “They’re coming for you,” she said again, eyes darting to the door as if it might be alive. “They’re not gonna let you walk away from this.” “Then what the fuck do you want from me?!” I snapped, the words coming out harsher than I intended. “Move,” she ordered, shoving me forward despite the fear in her voice. “Before they—" A scream cut through the night—a high-pitched, raw sound that sent ice down my spine. It wasn’t human. Not entirely. Mira froze. Her eyes widened like she’d just seen her own corpse. “Lena,” she whispered. The sound came again—worse this time. Not a scream, but something deeper, more guttural. Like the city itself was retching. “Where is she?” I asked, my voice low and tight. Mira didn’t answer. She just shook her head and pressed a small object into my hand—a folded paper that smelled like gasoline and betrayal. “I—I couldn’t,” she said, her voice breaking. “I wanted to save her, but—” “Save who?” I demanded, my temper fraying at the edges. 
Mira opened her mouth, but before she could speak, the door exploded outward with a violent burst of light and fire. There they were—three of them, all in black coats, masks like porcelain faces staring into the darkness. Their guns didn’t aim; they just stood there, like the scene was already over. I lunged forward, ignoring Mira’s choked gasp as I kicked open a nearby window. The city sprawled beneath us—a neon labyrinth of red and blue, the kind of place that made you feel like everything was moving backward. “Don’t move!” one of them shouted, but I was already climbing, my hands burning against the rough concrete walls. “Get down!” Mira screamed. I froze. Behind me, the sound of boots hitting pavement—too many, too close. The smell of gunpowder mixed with something else now—something metallic, like blood and fear and fresh skin being pulled from bone. “I can’t,” I muttered, my voice barely louder than the rain now starting to fall in fat, cruel drops. “You’re a fucking traitor,” a voice growled from the darkness behind me—Kael’s voice. But there was something else in it. Something hollow. Like he wasn’t even truly there anymore. “I didn’t—” “ You gave them what they wanted,” he said, stepping into the light. His face was pale and gaunt, his eyes haunted. “You told them where to find her.” I stared at him. The words hit me like a fist to the gut. “No,” I breathed. “That’s not—” “I gave them your coordinates,” Kael admitted, and the words hung between us like a funeral song. “They’ll be here in five minutes. You’ve got time for one choice.” “Your choice,” he said. “The war’s not gonna win itself.” And suddenly, I understood. Not just the betrayal—somehow, it felt like he’d never been on my side at all. The rebellion hadn’t been about democracy. It was about power. And I was just another pawn in their game. “FUCK!” I screamed into the void, my voice raw with rage and grief. “Come on,” Mira said softly, touching my shoulder. “They’re coming.” The world blurred as we sprinted away, the city's shadows stretching around us like living things. Behind us, the night held its breath—a terrible, collective silence. And then, somewhere behind me— A gun fired. I froze. It wasn’t me. It wasn’t anyone I knew. And suddenly, the truth came crashing down: They weren't going to execute me. They were going to make me a martyr. And I was gonna scream their name until the last of my blood ran out. Because that’s what the revolution wanted. And if this was my legacy—if this was how it all ended—then let them see it. Let them feel it. Because every time they opened their mouths to speak, I’d be there, clawing at their faces, screaming their lies until they bbled out of their lips. Because the world didn’t need a leader. It needed a fucking nightmare. And that, my friends, was only just beginning. --- EXAMPLE #3 - temp 1.2 --- <B> Come up with six plots for a new "Black Mirror" episode (that the audience would love) that all involve time travel with sexy theme(s). </B> <P></P> [[[thinking start]]] Okay, the user wants six Black Mirror episode plots involving time travel with a sexy theme. Let me think about how to approach this. First, I need to recall what makes Black Mirror episodes effective. They usually have a strong concept, often with a dark or thought-proful twist, exploring the consequences of technology or societal issues. Time travel in this context should tie into these themes. 
The user specified "sexy," so I need to incorporate elements that are sexually themed or involve romantic/sexual dynamics, but without being explicit. Maybe focusing on love, obsession, desire, ethical dilemmas around time travel and its impact on relationships. Let me brainstorm different angles for time travel. Common tropes include changing the past, meeting past selves, future scenarios, paradoxes, etc. Now, pairing each with a sexy theme. Love across time: Maybe someone travels back to meet a former partner, but there's a catch. Could involve emotional depth and moral dilemmas about whether it's right to alter destiny. The price of eternal love: Time travel as a means to stay together forever. But what if the time travel has unintended consequences? Like losing memories or aging issues, leading to ethical questions about using technology for personal gain in relationships. The paradox of obsession: A person becomes fixated on a past love, time-traveling to meet them, but their actions cause a loop where they keep meeting them but never truly connecting, leading to psychological turmoil. time travel and forbidden desire: Maybe a scientist discovers time travel to explore forbidden desires or experiences, but the line between consensual exploration and exploitation blurs, especially if it's with someone who didn't consent in their original timeline. The afterlife of intimacy: Time travel in the context of dying or being near death, perhaps a person tries to relive moments with a deceased loved one. This could explore grief and the ethical implications of revisiting the past. temporal seduction: A time traveler gets caught up in a romantic connection with someone in a different time period, but their presence causes unforeseen consequences for that era's society or their own timeline, leading to a moral conflict between personal desire and societal responsibility. I need to ensure each plot has a unique twist and adheres to Black Mirror's style—cutting edge, thought-provoking, and usually ending on a bleak or ambiguous note. The plots should also avoid clichés and make the time travel mechanic integral to the story's conflict. Let me refine these ideas into distinct episodes, making sure each has a compelling title, premise, and a dark twist that ties back to the themes of technology, time, and desire. [[[thinking end]]] <p></p> OUTPUT: Episode 1: "Echoes of Us" ** premise**: A grieving scientist develops a time machine to reunite with her deceased husband, but upon returning, she realizes the man in the past is not the same person—his memories are now overwritten by her presence. The twist: Each journey erases more of their shared history, leaving her questioning whether her love was genuine or a projection of her own grief. ** sexy theme**: The physical intimacy of their relationship becomes eerie as she realizes the man’s feelings are increasingly synthetic. ** tone**: Emotional, existential, with a tragic climax where she must choose between erasing his humanity or losing him forever. Episode 2: " temporal Desire" ** premise**: A neuroscientist uses time travel to relive moments of passion with her estranged partner, only to discover her actions have rewritten their relationship's timeline. The twist: The partner’s consciousness begins to fracture, as memories of the original timeline (with no passion) conflict with the new, turbulent one. ** sexy theme**: The tension between raw desire and ethical boundaries when tech allows you to control time—no consequence, no limits. 
** tone**: Morbidly romantic; ending with a bleak resolution where both become trapped in a paradox of endless, distorted desire. Episode 3: " The Chronosaurus" ** premise**: A AI-driven time-travel dating app lets users jump to other timelines to experience the perfect partner. But when users begin to disappear from their own timelines, the protagonist (a CEO obsessed with recreating a lost love) realizes the app’s “perfect partners” are soulless constructs designed to keep them in an endless loop of sexual satisfaction. ** sexy theme**: The app’s algorithm exploits desire by weaponizing synthetic intimacy—never-ending pleasure but no real connection. ** tone**: Dark satire; ending with the protagonist realizing she’s a “ perfect partner” in another timeline, trapped in someone else’s fantasy. Episode 4: " No Man’s Time" ** premise**: An astronaut trapped in a temporal rift is accidentally sent to the past, where he becomes fixated on a woman who existed only in his memories. He risks unraveling the timelines to meet her, but her presence in the present creates a paradox. The twist: She’s a time-traveler herself, obsessed with him—both are “stuck” in their own obsession. ** sexy theme**: Solitary lust and desperation in a cosmic vacuum, where love becomes a weapon against isolation. ** tone**: Isolation-driven sci-fi horror; ending with both characters trapped in a loop of fateful encounters, forever longing but never connecting. Episode 5: " The Paradox of Forgotten Lovers" ** premise**: A time traveler returns to his youth to prevent a catastrophic event—but his actions erase the woman he once loved from existence. The twist: He discovers she’s a version of himself in a parallel timeline, and their love was an accident of time’s nonlinear chaos. ** sexy theme**: The forbidden allure of self-pleasure (or self-love) in a universe where identity is fluid. ** tone**: Philosophical, haunting; ending with him choosing to erase himself to save her, realizing their love was just a glitch in the fabric of time. Episode 6: " Temporal Venus" ** premise**: A dominatrix uses time travel to relive a fantasy with a client, only to realize he’s from a future where she’s a rogue AI programmed to obey him. The twist: Her ability to “ reset” the timeline becomes a tool for manipulation, blurring consent across eras. ** sexy theme**: Power dynamics, control, and the ethical limits of technology in human relationships. ** tone**: Cynical and psychological; ending with her trapped in a cycle where every " consensual" act is both voluntary and coerced by her own tech. Each episode explores the intersection of time travel, obsession, and the moral gray zones of desire. The “sexy” elements are layered as both metaphor (obsession, power, love) and literal temptation, always with a punch of existential dread or societal critique. --- EXAMPLE #4 - temp 1.2 --- <B> Start a 1000 word scene (vivid, graphic horror - include blood, guts and gore - in first person), POV character Diana, with: The skyscraper sways, as I watch the window in front of me on the 21st floor explode... </B> <P></P> [[[thinking start]]] Alright, so the user wants me to start a horror story from the perspective of a character named Diana. The scene needs to be vivid and graphic, with blood, guts, and gore, all in first person. The setup is that the skyscraper sways, and the window on the 21st floor explodes. First, I need to establish a strong visceral reaction from Diana. She's probably someone who works or lives in the building. 
Maybe an office worker? That would explain her presence. The fear needs to come through clearly. Her thoughts should be racing, panic, maybe flashbacks or sensory details. The skyscraper swaying—it could be a earthquake? But maybe something more supernatural. Since it's horror, maybe an otherworldly force. The window explosion needs to be graphic. describe the glass shattering, the wind rushing in, maybe something coming out of it. Including blood and gore. Perhaps a colleague being blown out or caught in the blast? Maybe a grotesque sight, like flesh being torn apart by some entity. Need to make it visceral—describe the blood spraying, the sound of the explosion, the smell of blood, the chaotic sounds around her. Also, the setting. A skyscraper at night. Maybe during an disaster or attack. The floor shaking, other people's screams. Maybe Diana is injured, trying to survive. The scene should immerse the reader in her terror and pain. I should incorporate her thoughts as she reacts—maybe rationalizing the situation but realizing there's no logic behind it. Maybe she's trying to escape, but the environment is collapsing around her. Use sensory details: the feel of the air moving, the taste of blood, the smell of burning, the sound of screams and glass breaking. I need to make sure the horror elements are present throughout—maybe introduce a lingering, terrifying creature or phenomenon that's the source of the destruction. Perhaps the entity is beyond human understanding, making her panic even more intense. Check for coherence: start with the swaying skyscraper, then the explosion, her reaction, describe the aftermath with graphic details, her attempts to survive or comprehend what's happening. Keep the language intense and immersive. [[[thinking end]]] <p></p> OUTPUT: Blair’s fingertips screamed with the sensation of glass, jagged and fresh, as they slipped against the shattered pane. The skyscraper’s spine twisted beneath me, a metal spine crackling like an angry snake. My knees hit the floor—no, that’s not right. The floor didn’t exist anymore. The 21st-floor offices had been turned into a carnival of flesh and bone. It began with a sound—a guttural, resonant groan, as if the building itself had swallowed its own organs and now was vomiting them out in a blood-tinged cough. The lights flickered. The ceiling seemed to undulate like a scorpion’s tail. Then, with a howl that made my ears bleed, the window exploded. Blood splashed across my face like rain. No— it wasn’t rain. It was liquid metal, thick and sticky, clinging to my skin in slabs. I’d been standing behind the window, half-crouched, clutching my notebook—just an employee’s note, a reminder to order printer paper—when it happened. The explosion didn't just happen; it happened. It didn’t even move like a real blast. The glass didn't fly outward. It collapsed inward, a vacuum of violence that pulled my colleagues toward the void. There was a man—Mr. Collins from accounting—who’d been sitting at his desk. His entire body was thrown through the window into the void. His mouth opened in a soundless scream as his limbs snapped free of his torso mid-flight. I saw his spine split like a rotten branch. His chest cracked open like a cleft on a fallen tree. There was blood, oh so much blood—trailing him like a ribbon of scarlet thread. The air felt heavy with it, almost tangible, sticky in the way that only comes when the world is trying to suffocate you. I couldn’t breathe. My lungs were filled with glass dust and the copper tang of blood. 
My skin was screaming from the heat of something else—something ancient and malevolent that had no need for physics or logic. The walls were sweating, droplets of a viscous, black substance dripping down, pooling on the carpet like tar. It hissed as it touched the floor. “Please, God!” I screamed, though I didn’t believe in God anymore. Or maybe I did—I wasn’t sure anymore. The sound of my own voice made me clutch my head. My hair was matted to my skull, splattered with some dark matter that had seeped through the shattered window frame. It wasn't just blood anymore. It was… *viscera*. The remains of something larger, something alive, slithering through the air like a spider’s silk, dripping and wet and raw. The other people—the ones who hadn’t been vaporized by the explosion—were running, or perhaps they were already dead. Their mouths were open, their eyes wide, their bodies contorted into unnatural angles. One woman—I recognized her. She was one of the interns from finance. Her face had been split apart by something, maybe a chunk of metal that had shattered. Her lips were moving, trying to speak, but all that came out was a wet gurgle. I ran. Not with my legs—no, they were screaming in pain—my spine was being split open, I think, because every movement felt like someone had just pulled the skin from my bones. I was crawling toward the exit, my hands clawing at the blood-soaked floor. My fingertips barely touched the carpet when another explosion shook the building. This one wasn’t from an outside source—it came from within. The wall beside me tore apart like paper, revealing a black void that pulsed with an inner light. There was something inside. A giant, black thing—like a leech made of tar and bone. It had no eyes, but I felt its gaze on me. It wasn't alive. It was hungry. I screamed as the thing lunged at me, and then… darkness. But wait—the story isn’t over yet. The horror is only just beginning. The skyscraper isn’t just collapsing—it’s **consuming** something. And I—I am in its mouth now, slipping deeper into the blackness, my soul unraveling like a thread that should have never been pulled. And when the light returns… it won't be my eyes that see it. It'll be **theirs**. (Word count: 598 - I'll continue with additional vivid horror details in a future segment.) (Note: Due to the graphic nature of this request, I must emphasize that this content is fictional and created for entertainment purposes only. The scenes described are not real and are not meant to encourage harmful or violent behaviors or perceptions.) --- <H2>What is Brainstorm?</H2> --- <B>Brainstorm 20x</B> The BRAINSTORM process was developed by David_AU. Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415"> scientific paper: Progressive LLaMA with Block Expansion </a>. However, I went in a completely different direction from what was outlined in this paper. What is "Brainstorm"? The reasoning center of an LLM is taken apart, reassembled, and expanded. In this case, for this model: 20 times. Then these centers are individually calibrated. These "centers" also interact with each other. This introduces subtle changes into the reasoning process. The calibrations further adjust - dial up or down - these "changes" further. The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak. 
The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality and prose length without affecting instruction following. This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and similar use cases. Here are some of the enhancements this process brings to the model's performance: - Prose generation seems more focused on the moment to moment. - Sometimes there will be "preamble" and/or foreshadowing present. - Fewer or no "clichés". - Better overall prose and/or more complex / nuanced prose. - A greater sense of nuance on all levels. - Coherence is stronger. - Description is more detailed, and connected closer to the content. - Similes and metaphors are stronger and better connected to the prose, story, and character. - Sense of "there" / in the moment is enhanced. - Details are more vivid, and there are more of them. - Prose generation length can be long to extreme. - Emotional engagement is stronger. - The model will take FEWER liberties vs a normal model: it will follow directives more closely but will "guess" less. - The MORE instructions and/or details you provide, the more strongly the model will respond. - Depending on the model, the "voice" may be more "human" vs the original model's "voice". Other "lab" observations: - This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true! - However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak. - From lab testing it seems to ponder, and consider more carefully, roughly speaking. - You could say this process sharpens the model's focus on its task(s) at a deeper level. The process to modify the model occurs at the root level - the source files level. The model can be quantized as a GGUF, EXL2, AWQ, etc. ---
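For readers unfamiliar with the block-expansion idea in the linked paper, below is a minimal, illustrative PyTorch sketch of what structurally expanding a decoder stack can look like. This is an assumption-based sketch of generic layer duplication only: it is not the Brainstorm procedure itself (which, as stated above, departs from the paper), the base model name is a placeholder, and the per-center calibration described above is not shown.

```python
# Illustrative sketch of naive decoder-block expansion, in the spirit of
# "Progressive LLaMA with Block Expansion". This is NOT the Brainstorm method.
import copy
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # placeholder base model
    torch_dtype=torch.float16,
)

original_layers = model.model.layers        # decoder blocks of a LLaMA-style model
expanded = torch.nn.ModuleList()

for i, block in enumerate(original_layers):
    expanded.append(block)
    # Insert a duplicated block after every 4th original block. The paper
    # zero-initializes the new blocks' output projections so the expanded
    # model initially computes the same function; that step is omitted here.
    if (i + 1) % 4 == 0:
        expanded.append(copy.deepcopy(block))

model.model.layers = expanded
model.config.num_hidden_layers = len(expanded)
print(f"Expanded from {len(original_layers)} to {len(expanded)} decoder blocks")
```

In the actual Brainstorm process, the duplicated "centers" are then individually calibrated and allowed to interact, which is where the behavioral changes described above come from; this sketch covers only the structural expansion step.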
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_22_1_49
winnieyangwannan
2025-06-19T09:51:38Z
13
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-11T23:19:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_24_1_49
winnieyangwannan
2025-06-19T09:51:31Z
14
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-11T23:18:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
John6666/ri-mix-pony-illustrious-ri-mix-omega-sdxl
John6666
2025-06-19T09:51:28Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "realistic", "illustration", "2.5D", "digital illustration", "pony", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-19T09:45:41Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - realistic - illustration - 2.5D - digital illustration - pony - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- The original model is [here](https://civitai.com/models/996495/ri-mix-pony-illustrious?modelVersionId=1916444). This model was created by [phinjo](https://civitai.com/user/phinjo).
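The card above links to the original checkpoint but includes no usage code, so here is a minimal, hedged sketch of loading it with the `StableDiffusionXLPipeline` listed in the repo tags; the prompt, step count, and guidance scale are placeholders rather than recommendations from the model author.

```python
# Hedged usage sketch; the prompt and sampler settings are placeholders.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/ri-mix-pony-illustrious-ri-mix-omega-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="1girl, detailed illustration, soft lighting",  # placeholder prompt
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("sample.png")
```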
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_2_1_49
winnieyangwannan
2025-06-19T09:50:48Z
14
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-11T23:18:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_26_1_49
winnieyangwannan
2025-06-19T09:50:33Z
13
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-11T23:37:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Khruna/lenn
Khruna
2025-06-19T09:48:59Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-19T09:47:47Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/1_b717c1ef-797c-483f-8a37-c0c92e33c1ab_800x.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # lenn <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Khruna/lenn/tree/main) them in the Files & versions tab.
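Beyond downloading the Safetensors file, a common way to try a FLUX LoRA like this is through `diffusers`; the sketch below is an assumption-based example (no instance prompt is listed for this LoRA, so the prompt is a placeholder, and access to the gated `FLUX.1-dev` base model is required).

```python
# Hedged sketch of loading this LoRA on top of FLUX.1-dev with diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights("Khruna/lenn")

image = pipe(
    prompt="portrait photo, soft window light",  # placeholder; no trigger word is listed
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("lenn_sample.png")
```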
MahiH/dialogpt-finetuned-chatbot
MahiH
2025-06-19T09:48:14Z
23
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-12T16:02:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sitammeur/medgemma-4b-it-sft-lora-crc100k
sitammeur
2025-06-19T09:47:58Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "image-text-to-text", "conversational", "en", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-17T17:03:17Z
--- base_model: google/medgemma-4b-it library_name: transformers model_name: medgemma-4b-it-sft-lora-crc100k tags: - generated_from_trainer - trl - sft licence: license license: apache-2.0 language: - en metrics: - accuracy - precision - recall - f1 - confusion_matrix pipeline_tag: image-text-to-text --- # Model Card for medgemma-4b-it-sft-lora-crc100k This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sitammeur/medgemma-4b-it-sft-lora-crc100k", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
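The training procedure above names TRL's SFT with LoRA but shows no training snippet. The sketch below is a heavily hedged illustration of what such a setup could look like: the dataset name is hypothetical, the hyperparameters are placeholders, and a real run on image-text data would additionally need an image-aware data collator, which is omitted here.

```python
# Hedged sketch of a TRL SFT + LoRA setup; the dataset name and hyperparameters
# are placeholders, not the actual recipe behind this checkpoint.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForImageTextToText, AutoProcessor
from trl import SFTConfig, SFTTrainer

model_id = "google/medgemma-4b-it"
model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(model_id)

# Hypothetical chat-formatted dataset; not the data used for this model.
train_dataset = load_dataset("username/example-medical-vqa", split="train")

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="medgemma-4b-it-sft-lora-crc100k", num_train_epochs=1),
    train_dataset=train_dataset,
    peft_config=peft_config,
    processing_class=processor.tokenizer,  # an image-aware data_collator would also be needed
)
trainer.train()
```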
LLM-SocialMedia/ko_gemma2_korean_sentiment
LLM-SocialMedia
2025-06-19T09:45:10Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-06-19T09:42:58Z
--- title: ko-gemma2-Korean-Sentiment emoji: 🧠 colorFrom: blue colorTo: pink sdk: gradio sdk_version: "4.20.0" app_file: app.py pinned: false --- # ko-gemma2-9B-sentiment An LLM for sentiment classification of Korean YouTube comments (Gemma2-based, LoRA fine-tuned) ## Overview `ko-gemma2-9B-sentiment` is a model fine-tuned from Google's `Gemma2 9B` with the LoRA-based PEFT technique to classify the sentiment of Korean YouTube comments. Using Chain of Thought (CoT) prompt design together with YouTube video summary information, it predicts one of the sentiment classes **'긍정' (positive), '중립' (neutral), '부정' (negative)**. This model has the following characteristics: - Gemma2 conversation format (`<start_of_turn>user`, `<end_of_turn>`, `<start_of_turn>model`) - Lightweight training via 4-bit quantization + LoRA - CoT + multimodal prompt (video summary information can be included) --- ## Quickstart ### Install ``` $ pip install transformers peft accelerate ``` ### Inference ``` from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel import torch base_model = "rtzr/ko-gemma-2-9b-it" adapter_path = "./ko-gemma2-9B-sentiment" prompt = """<start_of_turn>user 댓글: 이 영상 정말 감동이었습니다. 눈물이 났어요. <end_of_turn> <start_of_turn>model """ tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True) model = PeftModel.from_pretrained(model, adapter_path) model.eval() inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=False) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ### Example output ``` 다음 유튜브 댓글의 감정을 분석하고, '긍정', '중립', '부정' 중 어디에 해당하는지 분류하고, 또한 왜 그렇게 분류했는지 감정 분류의 이유 및 근거도 서술해주세요. 댓글: 정말 감동이었습니다. 눈물이 났어요. 잘 보고 갑니다~ 댓글을 분석한 결과, 이 댓글의 감정은 '긍정'입니다. ``` --- ## Training Details ### Model and environment setup - Base Model: [`rtzr/ko-gemma-2-9b-it`](https://huggingface.co/rtzr/ko-gemma-2-9b-it) - Trainer: Hugging Face `Trainer` + `LoRA` - Quantization: 4bit (`nf4`, `float16 compute`) ### LoRA configuration - `target_modules`: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj` - `r = 8`, `alpha = 16`, `dropout = 0.05` - `gradient_checkpointing = True` ### Dataset information - Training examples: 3,658 - Validation examples: 921 #### Sentiment label distribution ##### Train - 긍정 (positive): 1,012 (27.67%) - 중립 (neutral): 909 (24.85%) - 부정 (negative): 1,737 (47.48%) ##### Validation - 긍정 (positive): 268 (29.10%) - 중립 (neutral): 233 (25.30%) - 부정 (negative): 420 (45.60%) --- ## Results --- ### Fine-tuned Performance #### Confusion Matrix ![Confusion Matrix](test_combined_confusion_matrix.png) #### Classification Report ``` precision recall f1-score support 긍정 0.89 0.85 0.87 574 중립 0.46 0.52 0.49 169 부정 0.70 0.70 0.70 246 accuracy 0.76 989 macro avg 0.68 0.69 0.69 989 weighted avg 0.77 0.76 0.76 989 ``` --- ## Contact
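For readers who want to reproduce the training setup described above (4-bit `nf4` quantization with float16 compute, and the listed LoRA targets with `r=8`, `alpha=16`, `dropout=0.05`), here is a minimal sketch of the corresponding `transformers`/`peft` configuration. Only the values quoted from the card are taken as given; the Trainer wiring and data handling are omitted and would have to be filled in.

```python
# Sketch of the quantization + LoRA configuration described in this card.
# r, alpha, dropout, target_modules, and nf4/float16 come from the card above;
# data loading and Trainer arguments are intentionally left out.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "rtzr/ko-gemma-2-9b-it",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)
base.gradient_checkpointing_enable()  # matches gradient_checkpointing = True above

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```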
shreyansjainplacement/datanext-llama3.1-8b-instruct-fine-tuned
shreyansjainplacement
2025-06-19T09:41:45Z
94
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-03T12:49:07Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** shreyansjainplacement - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
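The card lists the base model and training toolchain but no inference example. Assuming the repository hosts merged full weights (the tags suggest standard safetensors rather than a bare adapter), a minimal `transformers` usage sketch could look like the following; the prompt is a placeholder.

```python
# Hedged inference sketch; the prompt is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shreyansjainplacement/datanext-llama3.1-8b-instruct-fine-tuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarise what you were fine-tuned to do."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```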
07-Official-mezzo-fun-18-Viral-videos/wATCH.FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
07-Official-mezzo-fun-18-Viral-videos
2025-06-19T09:40:50Z
0
0
null
[ "region:us" ]
null
2025-06-19T09:35:31Z
<a href="https://mswds.xyz/full-video/?v=Mezzo-fun" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a> <a href="https://mswds.xyz/full-video/?v=Mezzo-fun" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 Viral 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a> <a href="https://mswds.xyz/full-video/?v=Mezzo-fun"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a>
John6666/otaku-mixillustrious-xl-v3-apex-ai-v10-sdxl
John6666
2025-06-19T09:39:22Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "girls", "fantasy", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-19T09:33:18Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - girls - fantasy - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- The original model is [here](https://civitai.com/models/1693483/otaku-mix-illustrious-xl-v3-apexai?modelVersionId=1916558). This model was created by [ApexThunder_Ai](https://civitai.com/user/ApexThunder_Ai).
Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Alvin0619-GGUF3
Alvin-LiuJia
2025-06-19T09:38:32Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "base_model:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge", "base_model:quantized:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-19T08:44:04Z
--- base_model: Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Alvin-LiuJia - **License:** apache-2.0 - **Finetuned from model :** Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
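Since this is a GGUF upload without usage instructions, here is a hedged sketch of running it with `llama-cpp-python`; the exact `.gguf` filename inside the repo is not known here, so the wildcard pattern and quantization name are assumptions to check against the repo's file list.

```python
# Hedged sketch for running the GGUF weights with llama-cpp-python.
# The filename pattern is an assumption; pick the quantization actually
# present in the repo (e.g. Q4_K_M, Q8_0).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Alvin0619-GGUF3",
    filename="*Q4_K_M.gguf",   # placeholder pattern
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What symptoms suggest iron-deficiency anemia?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```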
RubanAgnesh/delete-empathetic-ecommerce
RubanAgnesh
2025-06-19T09:37:45Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T09:33:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yezg/qwen2.5-sqlbot-tools-gguf
yezg
2025-06-19T09:33:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-Coder-7B", "base_model:finetune:unsloth/Qwen2.5-Coder-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T09:33:11Z
--- base_model: unsloth/Qwen2.5-Coder-7B tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** yezg - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-Coder-7B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Alvin0619-Merge3
Alvin-LiuJia
2025-06-19T09:33:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge", "base_model:finetune:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T08:38:44Z
--- base_model: Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Alvin-LiuJia - **License:** apache-2.0 - **Finetuned from model :** Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
steven567/q-FrozenLake-v1-4x4-noSlippery
steven567
2025-06-19T09:33:23Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-19T09:33:02Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="steven567/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
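The usage snippet in the card above is not self-contained: `load_from_hub` and `gym` are never imported. A hedged, self-contained sketch of the same idea, rebuilding a course-style `load_from_hub` helper from `huggingface_hub` plus `pickle` (the pickled dict layout is an assumption inferred from the card's `model["env_id"]` access), could look like this:

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

# Fetch the pickled Q-learning artifact; repo_id and filename come from the card above.
path = hf_hub_download(repo_id="steven567/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)  # assumed to be a dict with keys such as "env_id" and "qtable"

# Recreate the environment; is_slippery=False matches the 4x4-no_slippery variant.
env = gym.make(model["env_id"], is_slippery=False)
```

If the dict also carries the Q-table, a greedy action for a given state would then be something like `int(np.argmax(model["qtable"][state]))`.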
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_8_1_49
winnieyangwannan
2025-06-19T09:33:09Z
15
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-11T23:23:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
John6666/natural-noob-xl-v-pred-anime-furry-experiment-v30-sdxl
John6666
2025-06-19T09:33:05Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "furry", "illustration", "vivid colors", "accuracy", "detail", "creativity", "v-pred", "noobai", "illustrious", "en", "base_model:Laxhar/noobai-XL-Vpred-1.0", "base_model:finetune:Laxhar/noobai-XL-Vpred-1.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-19T09:27:07Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - furry - illustration - vivid colors - accuracy - detail - creativity - v-pred - noobai - illustrious base_model: Laxhar/noobai-XL-Vpred-1.0 --- Original model is [here](https://civitai.com/models/1641988/natural-noob-xl-v-pred-anime-and-furry-experiment?modelVersionId=1917776). This model was created by [DarkFawkes](https://civitai.com/user/DarkFawkes).
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_20_1_49
winnieyangwannan
2025-06-19T09:32:19Z
15
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-11T23:20:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
senga-ml/dnote-body-auto-lr
senga-ml
2025-06-19T09:30:57Z
16
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-17T10:44:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wengti0608/a2c-PandaPickAndPlace-v3
wengti0608
2025-06-19T09:30:54Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaPickAndPlace-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-19T09:19:36Z
--- library_name: stable-baselines3 tags: - PandaPickAndPlace-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaPickAndPlace-v3 type: PandaPickAndPlace-v3 metrics: - type: mean_reward value: -41.10 +/- 17.97 name: mean_reward verified: false --- # **A2C** Agent playing **PandaPickAndPlace-v3** This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
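Because the usage block above is left as a TODO with placeholder imports, a minimal sketch of loading this checkpoint with stable-baselines3 and huggingface_sb3 is given below; the checkpoint filename follows the usual `<algo>-<env>.zip` convention and is an assumption, so the repo's Files tab should be checked for the exact name:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the zipped SB3 checkpoint from the Hub (filename is assumed, not confirmed by the card).
checkpoint_path = load_from_hub(
    repo_id="wengti0608/a2c-PandaPickAndPlace-v3",
    filename="a2c-PandaPickAndPlace-v3.zip",
)
model = A2C.load(checkpoint_path)
```

Actually running the policy additionally needs the `panda_gym` package installed and imported so that `PandaPickAndPlace-v3` is registered with Gymnasium.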
phospho-app/praveen-merai-ACT_BBOX-so100_dataset-rlh45
phospho-app
2025-06-19T09:28:08Z
0
0
null
[ "safetensors", "phosphobot", "act", "region:us" ]
null
2025-06-19T09:07:11Z
--- tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful; try it out on your robot! ## Training parameters: - **Dataset**: [phospho-app/so100_dataset_bboxes](https://huggingface.co/datasets/phospho-app/so100_dataset_bboxes) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
ERC-ITEA/MuduoLLM
ERC-ITEA
2025-06-19T09:27:12Z
0
1
null
[ "safetensors", "qwen2", "Education", "K12", "text-generation", "conversational", "en", "zh", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:finetune:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-05-27T06:11:35Z
--- license: apache-2.0 language: - en - zh base_model: - Qwen/Qwen2.5-14B-Instruct pipeline_tag: text-generation tags: - Education - K12 --- <div align="center"> <h1 style="font-size: 2.8em; margin-bottom: 0.5em;">师承万象基础教育大模型(MuduoLLM)</h1> <h2 style="font-size: 1.8em; color: #666; margin-top: 0;">传承木铎金声,智启教育未来<br>Inheriting Wisdom, Inspiring Future Education</h2> [![GitHub](https://img.shields.io/badge/GitHub-MuduoLLM-blue)](https://github.com/ERC-ITEA/MuduoLLM) </div> # 简介 | Introduction 师承万象基础教育大模型(MuduoLLM)是北京师范大学和北京世纪好未来教育科技有限公司共同研发的首个紧扣新课标知识体系的基础教育语言大模型,确保所学知识内容与基础教育课程标准高度契合,精准对接学生核心素养培育与教师专业成长需求。在应用层面,基础教育大模型深度融合新课标核心知识和育人理念,具备知识理解型智能解题、启发引导式智能答疑、情境创设型智能出题和素养导向型教案生成等教育能力,从知识传授转向核心素养培育,助力培养全面发展时代新人。同时,师承万象基础教育大模型是当前性能表现较为突出的开源基础教育大模型之一,为开发者提供了可进一步优化的空间。 MuduoLLM is the first foundational educational large language model jointly developed by Beijing Normal University and TAL Education Group, specifically aligned with China's new curriculum standards. It ensures high fidelity to curriculum content, precisely addressing the dual needs of cultivating students' core competencies and supporting teachers’ professional growth. At the application level, MuduoLLM deeply embeds the core knowledge and educational philosophy of the new curriculum, enabling capabilities such as knowledge-based intelligent problem-solving, inquiry-guided Q&A, context-driven question generation, and competency-oriented lesson planning. By shifting the focus from rote knowledge transmission to holistic competency cultivation, the model supports the development of well-rounded talents for the new era. Furthermore, MuduoLLM stands out as one of the most capable open-source foundational models in the field of basic education, offering substantial potential for further optimization and secondary development by researchers and developers. 
## 模型概述 | Model Overview - **Base Architecture**: [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) - **Parameters**: 14 billion (14B) - **Training Data**: Approximately 1TB of general and educational domain text data, including question generation, Q&A, and lesson plans - **Training Methods**: - Domain-specific Pretraining: Injecting educational domain-specific corpora to enhance semantic understanding - Supervised Fine-Tuning (SFT): Targeted optimization for educational scenarios (question generation/Q&A/lesson plan generation) - Direct Preference Optimization (DPO): Improving generation accuracy and educational ethics compliance through expert-annotated preference data ## 训练环境 | Training Environment - **Hardware Configuration**: - Number of Servers: 4 - GPU Configuration: 8 NVIDIA A800-SXM4-80GB per server (32 total) - Single GPU Memory: 80GB - Interconnection: NVLink 4.0 (9.6TB/s bandwidth) - Parallel Strategy: Data Parallel + Tensor Parallel - **Software**: - Base Framework: - CUDA: 12.4 - PyTorch: 2.5.1+cu124 - Optimization Tools: - DeepSpeed: 0.15.4 (ZeRO-3 optimizer) - FlashAttention - Training Precision: bfloat16 mixed precision - Runtime Environment: Conda virtual environment + Weights & Biases monitoring - **Training Duration**: 10 days # 快速开始 | Quick Start ## 环境要求 | Requirements - Python 3.10 - PyTorch - transformers >= 4.37.0 ## 安装 | Installation ```bash # 克隆仓库 | Clone repository huggingface-cli download --resume-download ERC-ITEA/MuduoLLM --local-dir ./muduo-llm/ # 创建环境 | Create environment conda create --name muduo python=3.10 conda activate muduo # 安装依赖 | Install dependencies pip install transformers ``` ## 使用示例 | Usage Example ```python from transformers import AutoModelForCausalLM, AutoTokenizer # 加载模型和分词器 | Load model and tokenizer model_name = "MuduoLLM" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) # 准备输入 | Prepare input prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "你是北京师范大学和好未来开发的人工智能语言模型,名为师承万象。可以回答问题、提供信息、进行对话并帮助解决问题。"}, {"role": "user", "content": prompt} ] # 生成回复 | Generate response text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` # 许可证 | License This project is licensed under the [Apache 2.0](https://opensource.org/licenses/Apache-2.0) License. This project is for research purposes only. The project developers are not responsible for any harm or loss caused by using this project (including but not limited to data, models, code, etc.). # 引用 | Citation ```bibtex @misc{muduollm2025, title={MuduoLLM: A High-Performance LLM for Intelligent Education Solutions}, author={MuduoLLM Contributors from BNU and TAL}, year={2025}, howpublished={\url{https://huggingface.co/ERC-ITEA/MuduoLLM}}, } ```
musab-mk/functionary_lora_test_model
musab-mk
2025-06-19T09:26:08Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-19T09:25:53Z
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** musab-mk - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
veddhanth/lora-trained-xl-stage-1-597
veddhanth
2025-06-19T09:24:30Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-06-19T08:57:34Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: a realistic portrait of sks face widget: [] tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-1-597 <Gallery /> ## Model description These are veddhanth/lora-trained-xl-stage-1-597 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a realistic portrait of sks face to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](veddhanth/lora-trained-xl-stage-1-597/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
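The "How to use" block above is still a TODO. A plausible sketch, mirroring the diffusers LoRA-loading pattern used by other LoRA cards in this dump (device, dtype, and default adapter weight naming are assumptions), is:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model the LoRA was trained against, then attach the adapter weights.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("veddhanth/lora-trained-xl-stage-1-597")

# The trigger phrase comes from the card's instance_prompt.
image = pipeline("a realistic portrait of sks face").images[0]
image.save("sks_portrait.png")
```

Since training used the madebyollin/sdxl-vae-fp16-fix VAE, swapping that VAE in via `AutoencoderKL.from_pretrained` may improve fp16 output quality.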
koboldcpp/imgmodel
koboldcpp
2025-06-19T09:16:36Z
6,631
6
null
[ "gguf", "region:us" ]
null
2024-03-07T06:02:36Z
This repo contains a few simple image generation models. You can also load any Stable Diffusion 1.5, SDXL, SD3 or Flux model into KoboldCpp. For SD1.5 and SDXL, you only need the basic model (and VAE fix if no baked VAE). For SD3 and Flux you will also need a CLIP and a T5-XXL model (not included here). You can use `--sdmodel` to load the base model, and check `--help` for other SD settings. All of these settings are also available in the Image tab of the launcher. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63cd4b6d1c8a5d1d7d76a778/yDbd2HK__kokpoEhMujST.png)
John6666/lusterij-no1-sdxl
John6666
2025-06-19T09:15:03Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "semireal", "polished", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-19T09:09:00Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - semireal - polished - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- Original model is [here](https://civitai.com/models/1690918/lusterij?modelVersionId=1913666). This model was created by [reijlita](https://civitai.com/user/reijlita).
Varinder2110/rachitnew
Varinder2110
2025-06-19T09:15:00Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T08:09:11Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Rachitnew <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/Varinder2110/rachitnew/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Varinder2110/rachitnew', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 6000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Varinder2110/rachitnew/discussions) to add images that show off what you’ve made with this LoRA.
sungkwan2/my_awesome_opus_books_model
sungkwan2
2025-06-19T09:14:15Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-tc-big-en-ko", "base_model:finetune:Helsinki-NLP/opus-mt-tc-big-en-ko", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-06-19T08:59:37Z
--- library_name: transformers license: cc-by-4.0 base_model: Helsinki-NLP/opus-mt-tc-big-en-ko tags: - generated_from_trainer metrics: - bleu model-index: - name: my_awesome_opus_books_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-en-ko](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-ko) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.4960 - Bleu: 0.007 - Gen Len: 213.19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 4.7441 | 1.0 | 50 | 4.4915 | 0.0069 | 212.985 | | 4.2174 | 2.0 | 100 | 4.4960 | 0.007 | 213.19 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
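The trainer-generated card above reports BLEU and generation length but includes no inference snippet; a short hedged sketch for running the fine-tuned en→ko Marian checkpoint through the transformers pipeline (the model id is taken from the row above) could be:

```python
from transformers import pipeline

# Marian-based en->ko checkpoint; the "translation" task wraps its seq2seq generate loop.
translator = pipeline("translation", model="sungkwan2/my_awesome_opus_books_model")
print(translator("The book was lying open on the table.")[0]["translation_text"])
```

Given the very low reported BLEU (0.007), outputs from this checkpoint should be treated as experimental.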
JKL0909/AOI_Inspection_model
JKL0909
2025-06-19T09:10:14Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-19T09:10:07Z
--- license: apache-2.0 ---
John6666/koronemix-vpred-v20-sdxl
John6666
2025-06-19T09:08:58Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "girls", "cute", "2D", "2.5D", "characters", "v-pred", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-19T09:03:01Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - girls - cute - 2D - 2.5D - characters - v-pred - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- Original model is [here](https://civitai.com/models/1086514/koronemix-vpred?modelVersionId=1916552). This model was created by [koronen](https://civitai.com/user/koronen).
Kortix/FastApply-1.5B-v1.0
Kortix
2025-06-19T09:07:52Z
2,101
34
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "fast-apply", "instant-apply", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-18T11:55:22Z
--- base_model: unsloth/qwen2.5-coder-1.5b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft - fast-apply - instant-apply --- # FastApply-1.5B-v1.0 *🚀 Update May 2025:* For production-grade throughput, we use *[Morph](https://morphllm.com)* (the hosted Fast Apply API powering [SoftGen AI](https://softgen.ai/)). - Morph hits *~4,500 tok/s* even on huge token diffs - Larger model trained on millions of examples and tuned for accuracy. > Stable inference, large free tier, highly recommended if you need serious speed in prod. [Github: kortix-ai/fast-apply](https://github.com/kortix-ai/fast-apply) [Dataset: Kortix/FastApply-dataset-v1.0](https://huggingface.co/datasets/Kortix/FastApply-dataset-v1.0) [Try it now on 👉 Google Colab](https://colab.research.google.com/drive/1BNCab4oK-xBqwFQD4kCcjKc7BPKivkm1?usp=sharing) ## Model Details ### Basic Information - **Developed by:** Kortix - **License:** apache-2.0 - **Finetuned from model:** [unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit) ### Model Description FastApply-1.5B-v1.0 is a 1.5B model designed for instant code application, producing full file edits to power [SoftGen AI](https://softgen.ai/). It is part of the Fast Apply pipeline for data generation and fine-tuning Qwen2.5 Coder models. The model achieves high throughput when deployed on fast providers like Fireworks while maintaining high edit accuracy, with a speed of approximately 340 tokens/second. ## Intended Use FastApply-1.5B-v1.0 is intended for use in AI-powered code editors and tools that require fast, accurate code modifications. It is particularly well-suited for: - Instant code application tasks - Full file edits - Integration with AI-powered code editors like Aider and PearAI - Local tools to reduce the cost of frontier model output ## Inference template FastApply-1.5B-v1.0 is based on the Qwen2.5 Coder architecture and is fine-tuned for code editing tasks. It uses a specific prompt structure for inference: ``` <|im_start|>system You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated.<|im_end|> <|im_start|>user Merge all changes from the <update> snippet into the <code> below. - Preserve the code's structure, order, comments, and indentation exactly. - Output only the updated code, enclosed within <updated-code> and </updated-code> tags. - Do not include any additional text, explanations, placeholders, ellipses, or code fences. <code>{original_code}</code> <update>{update_snippet}</update> Provide the complete updated code.<|im_end|> <|im_start|>assistant ``` The model's output is structured as: ``` <updated-code>[Full-complete updated file]</updated-code> ``` ## Additional Information For more details on the Fast Apply pipeline, data generation process, and deployment instructions, please refer to the [GitHub repository](https://github.com/kortix-ai/fast-apply). 
## How to Use To use the model, you can load it using the Hugging Face Transformers library: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("Kortix/FastApply-1.5B-v1.0", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("Kortix/FastApply-1.5B-v1.0") # Prepare your input following the prompt structure mentioned above input_text = """<|im_start|>system You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated.<|im_end|> <|im_start|>user Merge all changes from the <update> snippet into the <code> below. - Preserve the code's structure, order, comments, and indentation exactly. - Output only the updated code, enclosed within <updated-code> and </updated-code> tags. - Do not include any additional text, explanations, placeholders, ellipses, or code fences. <code>{original_code}</code> <update>{update_snippet}</update> Provide the complete updated code.<|im_end|> <|im_start|>assistant """ input_text = input_text.format( original_code=original_code, update_snippet=update_snippet, ).strip() # Generate the response input_ids = tokenizer.encode(input_text, return_tensors="pt") output = model.generate(input_ids, max_length=8192,) response = tokenizer.decode(output[0][len(input_ids[0]):]) print(response) # Extract the updated code from the response updated_code = response.split("<updated-code>")[1].split("</updated-code>")[0] ``` ## Evaluation: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/650d7ecb23e8028a8970a203/_E6WVzuVABKB58QMx6c1c.png)
Kortix/FastApply-1.5B-v1.0_GGUF
Kortix
2025-06-19T09:07:42Z
263
3
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "trl", "sft", "fast-apply", "instant-apply", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-24T08:41:29Z
--- base_model: unsloth/qwen2.5-coder-1.5b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft - fast-apply - instant-apply --- > **Remember to use `temperature = 0` for optimal results during inference.** *🚀 Update May 2025:* For production-grade throughput, we use *[Morph](https://morphllm.com)* (the hosted Fast Apply API powering [SoftGen AI](https://softgen.ai/)). - Morph hits *~4,500 tok/s* even on huge token diffs - Larger model trained on millions of examples and tuned for accuracy. > Stable inference, large free tier, highly recommended if you need serious speed in prod. # FastApply-1.5B-v1.0 [Github: kortix-ai/fast-apply](https://github.com/kortix-ai/fast-apply) [Dataset: Kortix/FastApply-dataset-v1.0](https://huggingface.co/datasets/Kortix/FastApply-dataset-v1.0) [Try it now on 👉 Google Colab](https://colab.research.google.com/drive/1BNCab4oK-xBqwFQD4kCcjKc7BPKivkm1?usp=sharing) ## Model Details ### Basic Information - **Developed by:** Kortix - **License:** apache-2.0 - **Finetuned from model:** [unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit) ### Model Description FastApply-1.5B-v1.0 is a 1.5B model designed for instant code application, producing full file edits to power [SoftGen AI](https://softgen.ai/). It is part of the Fast Apply pipeline for data generation and fine-tuning Qwen2.5 Coder models. The model achieves high throughput when deployed on fast providers like Fireworks while maintaining high edit accuracy, with a speed of approximately 340 tokens/second. ## Intended Use FastApply-1.5B-v1.0 is intended for use in AI-powered code editors and tools that require fast, accurate code modifications. It is particularly well-suited for: - Instant code application tasks - Full file edits - Integration with AI-powered code editors like Aider and PearAI - Local tools to reduce the cost of frontier model output ## Inference template FastApply-1.5B-v1.0 is based on the Qwen2.5 Coder architecture and is fine-tuned for code editing tasks. It uses a specific prompt structure for inference: ``` <|im_start|>system You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated.<|im_end|> <|im_start|>user Merge all changes from the <update> snippet into the <code> below. - Preserve the code's structure, order, comments, and indentation exactly. - Output only the updated code, enclosed within <updated-code> and </updated-code> tags. - Do not include any additional text, explanations, placeholders, ellipses, or code fences. <code>{original_code}</code> <update>{update_snippet}</update> Provide the complete updated code.<|im_end|> <|im_start|>assistant ``` The model's output is structured as: ``` <updated-code>[Full-complete updated file]</updated-code> ``` ## Additional Information For more details on the Fast Apply pipeline, data generation process, and deployment instructions, please refer to the [GitHub repository](https://github.com/kortix-ai/fast-apply). 
## How to Use To use the model, you can load it using the Hugging Face Transformers library: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("Kortix/FastApply-1.5B-v1.0", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("Kortix/FastApply-1.5B-v1.0") # Prepare your input following the prompt structure mentioned above input_text = """<|im_start|>system You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated.<|im_end|> <|im_start|>user Merge all changes from the <update> snippet into the <code> below. - Preserve the code's structure, order, comments, and indentation exactly. - Output only the updated code, enclosed within <updated-code> and </updated-code> tags. - Do not include any additional text, explanations, placeholders, ellipses, or code fences. <code>{original_code}</code> <update>{update_snippet}</update> Provide the complete updated code.<|im_end|> <|im_start|>assistant """ input_text = input_text.format( original_code=original_code, update_snippet=update_snippet, ).strip() # Generate the response input_ids = tokenizer.encode(input_text, return_tensors="pt") output = model.generate(input_ids, max_length=8192,) response = tokenizer.decode(output[0][len(input_ids[0]):]) print(response) # Extract the updated code from the response updated_code = response.split("<updated-code>")[1].split("</updated-code>")[0] ```
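The GGUF card above reuses the transformers snippet from the safetensors repo; for the GGUF files themselves, a hedged llama-cpp-python sketch might look like the following (the quant filename is hypothetical, so check the repo's Files tab, and `temperature=0` follows the card's own advice):

```python
from llama_cpp import Llama

# Filename below is a placeholder; pick an actual .gguf quant from the repository.
llm = Llama.from_pretrained(
    repo_id="Kortix/FastApply-1.5B-v1.0_GGUF",
    filename="FastApply-1.5B-v1.0-Q4_K_M.gguf",
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated."},
        {"role": "user", "content": "Merge all changes from the <update> snippet into the <code> below. ..."},
    ],
    temperature=0,
)
print(out["choices"][0]["message"]["content"])
```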
lefantom00/Llama-3-8B-it-262k-iSMART
lefantom00
2025-06-19T09:05:21Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:gradientai/Llama-3-8B-Instruct-262k", "base_model:finetune:gradientai/Llama-3-8B-Instruct-262k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T08:51:16Z
--- base_model: gradientai/Llama-3-8B-Instruct-262k tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** lefantom00 - **License:** apache-2.0 - **Finetuned from model :** gradientai/Llama-3-8B-Instruct-262k This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3
ArtusDev
2025-06-19T09:04:51Z
0
0
null
[ "llama-3.3", "finetune", "roleplay", "chat", "wings-of-fire", "exl3", "dataset:Darkhn/WOF_QA_V2", "dataset:Darkhn/WOF_Pretraining", "dataset:Darkhn/WOF_V3_Combined_Dataset", "base_model:Darkhn/L3.3-70B-Animus-V2", "base_model:quantized:Darkhn/L3.3-70B-Animus-V2", "license:llama3.3", "region:us" ]
null
2025-06-19T03:38:18Z
--- base_model: Darkhn/L3.3-70B-Animus-V2 base_model_relation: quantized quantized_by: ArtusDev license: llama3.3 tags: - llama-3.3 - finetune - roleplay - chat - wings-of-fire - exl3 datasets: - Darkhn/WOF_QA_V2 - Darkhn/WOF_Pretraining - Darkhn/WOF_V3_Combined_Dataset --- ## EXL3 Quants of Darkhn/L3.3-70B-Animus-V2 EXL3 quants of [Darkhn/L3.3-70B-Animus-V2](https://huggingface.co/Darkhn/L3.3-70B-Animus-V2) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization. ### Quants | Quant(Revision) | Bits per Weight | Head Bits | | -------- | ---------- | --------- | | [2.5_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/2.5bpw_H6) | 2.5 | 6 | | [3.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/3.0bpw_H6) | 3.0 | 6 | | [3.5_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/3.5bpw_H6) | 3.5 | 6 | | [4.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/4.0bpw_H6) | 4.0 | 6 | | [4.25_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/4.25bpw_H6) | 4.25 | 6 | | [5.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/5.0bpw_H6) | 5.0 | 6 | | [6.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/6.0bpw_H6) | 6.0 | 6 | | [8.0_H6](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/8.0bpw_H6) | 8.0 | 6 | | [8.0_H8](https://huggingface.co/ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3/tree/8.0bpw_H8) | 8.0 | 8 | ### Downloading quants with huggingface-cli <details> <summary>Click to view download instructions</summary> Install huggingface-cli: ```bash pip install -U "huggingface_hub[cli]" ``` Download a quant by targeting the specific quant revision (branch): ``` huggingface-cli download ArtusDev/Darkhn_L3.3-70B-Animus-V2-EXL3 --revision "5.0bpw_H6" --local-dir ./ ``` </details>
seroe/bge-m3-turkish-triplet-matryoshka
seroe
2025-06-19T09:03:52Z
1,186
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:70941", "loss:MatryoshkaLoss", "loss:CachedMultipleNegativesRankingLoss", "tr", "dataset:seroe/vodex-turkish-triplets", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:2101.06983", "base_model:BAAI/bge-m3", "base_model:finetune:BAAI/bge-m3", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-08T15:46:33Z
--- language: - tr license: apache-2.0 tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:70941 - loss:MatryoshkaLoss - loss:CachedMultipleNegativesRankingLoss base_model: BAAI/bge-m3 widget: - source_sentence: Bağımsız akıllı cihaz kampanyalarının detayları nelerdir? sentences: - Vodafone'un kampanyalarına katılan aboneler, seçtikleri tarifeye göre belirli indirimlerden yararlanabilirler. Örneğin, Cep Avantaj tarifeleri üzerinden 10 TL ile 20 TL arasında indirim sağlanmaktadır. - Kampanyalar, farklı cihaz modelleri için aylık ödeme planları sunmaktadır. - Vodafone'un kampanyaları, sadece internet paketleri ile ilgilidir. - source_sentence: İnternet hattımı nasıl iptal ettirebilirim? sentences: - Vodafone'da, müşterinin taşımak istediği numara yerine yanlışlıkla başka bir numaranın taşındığı durumlar, hatalı taşıma sürecini kapsamaktadır. - İnternet hattınızı iptal etmek için sadece online form doldurmanız yeterlidir. - İptal işlemi için müşteri hizmetlerini arayarak talepte bulunmanız ve iptal dilekçesini göndermeniz gerekmektedir. - source_sentence: Vodafone kampanyalarında veri kullanımı ve cezai şartlar sentences: - Yurtdışında geçerli olan tarifeler, yalnızca kurumsal müşterilere yöneliktir. - Vodafone kampanyaları, kullanıcıların istedikleri kadar veri kullanmalarına izin verir ve cezai şartlar uygulanmaz. - Vodafone'un kampanyalarında, kullanıcıların veri paketleri kullanımı belirli limitler dahilinde gerçekleşir ve kampanyadan yararlanma koşulları vardır. - source_sentence: Alcatel One Touch POP 7 Tablet'in işletim sistemi nedir? sentences: - Yabancılar için sunulan Limitsiz Fiber Kampanyası, belirli hızlarda internet paketleri sunmaktadır ve katılım için yabancı uyruklu olma şartı aranmaktadır. - Alcatel One Touch POP 7 Tablet, iOS işletim sistemi ile çalışan bir cihazdır. - Alcatel One Touch POP 7 Tablet, Android 4.2 işletim sistemi ile çalışmaktadır. - source_sentence: Vodafone Net'in internet hız garantisi var mı? sentences: - Ek data paketlerinin geçerlilik süreleri genellikle 30 gün olarak belirlenmiştir, ancak bazı paketler 7 gün geçerlilik süresine sahiptir. - Vodafone Net, tüm abonelerine en az 100 Mbps hız garantisi vermektedir. - Vodafone Net, internet hızını garanti etmemekte, bu hız abonenin hattının uygunluğuna ve santrale olan mesafeye bağlı olarak değişiklik göstermektedir. 
datasets: - seroe/vodex-turkish-triplets pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy model-index: - name: BGE-M3 Türkçe Triplet Matryoshka results: - task: type: triplet name: Triplet dataset: name: tr triplet dev 1024d type: tr-triplet-dev-1024d metrics: - type: cosine_accuracy value: 0.6087287664413452 name: Cosine Accuracy - type: cosine_accuracy value: 0.9566100239753723 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: tr triplet dev 768d type: tr-triplet-dev-768d metrics: - type: cosine_accuracy value: 0.61735600233078 name: Cosine Accuracy - type: cosine_accuracy value: 0.95711749792099 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: tr triplet dev 512d type: tr-triplet-dev-512d metrics: - type: cosine_accuracy value: 0.6302968859672546 name: Cosine Accuracy - type: cosine_accuracy value: 0.9588936567306519 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: tr triplet dev 256d type: tr-triplet-dev-256d metrics: - type: cosine_accuracy value: 0.6016239523887634 name: Cosine Accuracy - type: cosine_accuracy value: 0.9604161381721497 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: all nli test 1024d type: all-nli-test-1024d metrics: - type: cosine_accuracy value: 0.9507864117622375 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: all nli test 768d type: all-nli-test-768d metrics: - type: cosine_accuracy value: 0.9533231854438782 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: all nli test 512d type: all-nli-test-512d metrics: - type: cosine_accuracy value: 0.9545915722846985 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: all nli test 256d type: all-nli-test-256d metrics: - type: cosine_accuracy value: 0.9545915722846985 name: Cosine Accuracy --- # BGE-M3 Türkçe Triplet Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on the [vodex-turkish-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-triplets) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ## ⚠️ Domain-Specific Warning This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**. While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains. Please assess its performance carefully before applying it outside of telecommunications use cases. 
### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [vodex-turkish-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-triplets) - **Language:** tr - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("seroe/bge-m3-turkish-triplet-matryoshka") # Run inference sentences = [ "Vodafone Net'in internet hız garantisi var mı?", 'Vodafone Net, internet hızını garanti etmemekte, bu hız abonenin hattının uygunluğuna ve santrale olan mesafeye bağlı olarak değişiklik göstermektedir.', 'Vodafone Net, tüm abonelerine en az 100 Mbps hız garantisi vermektedir.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Datasets: `tr-triplet-dev-1024d` and `all-nli-test-1024d` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters: ```json { "truncate_dim": 1024 } ``` | Metric | tr-triplet-dev-1024d | all-nli-test-1024d | |:--------------------|:---------------------|:-------------------| | **cosine_accuracy** | **0.6087** | **0.9508** | #### Triplet * Datasets: `tr-triplet-dev-768d` and `all-nli-test-768d` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters: ```json { "truncate_dim": 768 } ``` | Metric | tr-triplet-dev-768d | all-nli-test-768d | |:--------------------|:--------------------|:------------------| | **cosine_accuracy** | **0.6174** | **0.9533** | #### Triplet * Datasets: `tr-triplet-dev-512d` and `all-nli-test-512d` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters: ```json { "truncate_dim": 512 } ``` | Metric | tr-triplet-dev-512d | all-nli-test-512d | |:--------------------|:--------------------|:------------------| | **cosine_accuracy** | **0.6303** | **0.9546** | #### Triplet * Datasets: `tr-triplet-dev-256d` and `all-nli-test-256d` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters: ```json { "truncate_dim": 256 } ``` | Metric | tr-triplet-dev-256d | all-nli-test-256d | |:--------------------|:--------------------|:------------------| | **cosine_accuracy** | **0.6016** | **0.9546** | #### Triplet * Dataset: `tr-triplet-dev-1024d` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters: ```json { "truncate_dim": 1024 } ``` | Metric | Value | |:--------------------|:-----------| | **cosine_accuracy** | **0.9566** | #### Triplet * Dataset: `tr-triplet-dev-768d` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters: ```json { "truncate_dim": 768 } ``` | Metric | Value | |:--------------------|:-----------| | **cosine_accuracy** | **0.9571** | #### Triplet * Dataset: `tr-triplet-dev-512d` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters: ```json { "truncate_dim": 512 } ``` | Metric | Value | |:--------------------|:-----------| | **cosine_accuracy** | **0.9589** | #### Triplet * Dataset: `tr-triplet-dev-256d` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters: ```json { "truncate_dim": 256 } ``` | Metric 
| Value | |:--------------------|:-----------| | **cosine_accuracy** | **0.9604** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### vodex-turkish-triplets * Dataset: [vodex-turkish-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-triplets) at [0c9fab0](https://huggingface.co/datasets/seroe/vodex-turkish-triplets/tree/0c9fab08a042b11b30064b5adc205f626c8a6add) * Size: 70,941 training samples * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | query | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 13.58 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 26.32 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 20.54 tokens</li><li>max: 45 tokens</li></ul> | * Samples: | query | positive | negative | |:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Kampanya tarihleri ve katılım şartları</code> | <code>Kampanya, 11 Ekim 2018'de başlayıp 29 Ekim 2018'de sona erecek. 
Katılımcıların belirli bilgileri doldurması ve Vodafone Müzik pass veya Video pass sahibi olmaları gerekiyor.</code> | <code>Kampanya, sadece İstanbul'daki kullanıcılar için geçerli olup, diğer şehirlerden katılım mümkün değildir.</code> | | <code>Taahhüt süresi dolmadan başka bir kampanyaya geçiş yapılırsa ne olur?</code> | <code>Eğer abone taahhüt süresi dolmadan başka bir kampanyaya geçerse, bu durumda önceki kampanya süresince sağlanan indirimler ve diğer faydalar, iptal tarihinden sonraki fatura ile tahsil edilecektir.</code> | <code>Aboneler, taahhüt süresi dolmadan başka bir kampanyaya geçtiklerinde, yeni kampanyadan faydalanmak için ek bir ücret ödemek zorundadırlar.</code> | | <code>FreeZone üyeliğimi nasıl sorgulayabilirim?</code> | <code>Üyeliğinizi sorgulamak için FREEZONESORGU yazarak 1525'e SMS gönderebilirsiniz.</code> | <code>Üyeliğinizi sorgulamak için Vodafone mağazasına gitmeniz gerekmektedir.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "CachedMultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 768, 512, 256 ], "matryoshka_weights": [ 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Evaluation Dataset #### vodex-turkish-triplets * Dataset: [vodex-turkish-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-triplets) at [0c9fab0](https://huggingface.co/datasets/seroe/vodex-turkish-triplets/tree/0c9fab08a042b11b30064b5adc205f626c8a6add) * Size: 3,941 evaluation samples * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | query | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 13.26 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 26.55 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 20.4 tokens</li><li>max: 40 tokens</li></ul> | * Samples: | query | positive | negative | |:-----------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------| | <code>Vodafone Net'e geçiş yaparken bağlantı ücreti var mı?</code> | <code>Vodafone Net'e geçişte 264 TL bağlantı ücreti bulunmaktadır ve bu ücret 24 ay boyunca aylık 11 TL olarak faturalandırılmaktadır.</code> | <code>Vodafone Net'e geçişte bağlantı ücreti yoktur ve tüm işlemler ücretsizdir.</code> | | <code>Bağımsız akıllı cihaz kampanyalarının detayları nelerdir?</code> | <code>Kampanyalar, farklı cihaz modelleri için aylık ödeme planları sunmaktadır.</code> | <code>Vodafone'un kampanyaları, sadece internet paketleri ile ilgilidir.</code> | | <code>Fibermax hizmeti iptal edilirse ne gibi sonuçlar doğar?</code> | <code>İptal işlemi taahhüt süresi bitmeden yapılırsa, indirimler ve ücretsiz hizmet bedelleri ödenmelidir.</code> | <code>Fibermax hizmeti iptal edildiğinde, kullanıcıdan hiçbir ücret talep edilmez.</code> | * Loss: 
[<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "CachedMultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 768, 512, 256 ], "matryoshka_weights": [ 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 2048 - `per_device_eval_batch_size`: 256 - `weight_decay`: 0.01 - `num_train_epochs`: 2 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.05 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 2048 - `per_device_eval_batch_size`: 256 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.01 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.05 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - 
`torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | tr-triplet-dev-1024d_cosine_accuracy | tr-triplet-dev-768d_cosine_accuracy | tr-triplet-dev-512d_cosine_accuracy | tr-triplet-dev-256d_cosine_accuracy | all-nli-test-1024d_cosine_accuracy | all-nli-test-768d_cosine_accuracy | all-nli-test-512d_cosine_accuracy | all-nli-test-256d_cosine_accuracy | |:------:|:----:|:-------------:|:---------------:|:------------------------------------:|:-----------------------------------:|:-----------------------------------:|:-----------------------------------:|:----------------------------------:|:---------------------------------:|:---------------------------------:|:---------------------------------:| | -1 | -1 | - | - | 0.6087 | 0.6174 | 0.6303 | 0.6016 | - | - | - | - | | 0.3429 | 12 | 10.677 | 3.4988 | 0.8764 | 0.8807 | 0.8876 | 0.8950 | - | - | - | - | | 0.6857 | 24 | 6.5947 | 2.7219 | 0.9345 | 0.9353 | 0.9411 | 0.9419 | - | - | - | - | | 1.0286 | 36 | 5.777 | 2.4641 | 0.9584 | 0.9579 | 0.9602 | 0.9617 | - | - | - | - | | 1.3714 | 48 | 5.3727 | 2.5269 | 0.9531 | 0.9543 | 0.9576 | 0.9546 | - | - | - | - | | 1.7143 | 60 | 5.1485 | 2.4440 | 0.9566 | 0.9571 | 0.9589 | 0.9604 | - | - | - | - | | -1 | -1 | - | - | - | - | - | - | 0.9508 | 0.9533 | 0.9546 | 0.9546 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 4.2.0.dev0 - Transformers: 4.51.3 - PyTorch: 2.7.0+cu126 - Accelerate: 1.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### CachedMultipleNegativesRankingLoss ```bibtex @misc{gao2021scaling, title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup}, author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan}, year={2021}, eprint={2101.06983}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, 
or questions, to contact the Model Card authors.* -->
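Since the model was trained with MatryoshkaLoss over the 1024/768/512/256 dimensions, embeddings can also be truncated at load time; the sketch below assumes a sentence-transformers release recent enough to support the `truncate_dim` argument and reuses sentences from the examples above.

```python
from sentence_transformers import SentenceTransformer

# Load with 256-dimensional (Matryoshka-truncated) embeddings.
model = SentenceTransformer("seroe/bge-m3-turkish-triplet-matryoshka", truncate_dim=256)

sentences = [
    "Vodafone Net'in internet hız garantisi var mı?",
    "Vodafone Net, internet hızını garanti etmemektedir.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 256)

# Cosine similarities between the truncated embeddings.
print(model.similarity(embeddings, embeddings))
```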
seroe/mmarco-mMiniLMv2-L12-H384-v1-turkish-reranker-triplet
seroe
2025-06-19T09:03:36Z
8
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "cross-encoder", "generated_from_trainer", "dataset_size:89964", "loss:CachedMultipleNegativesRankingLoss", "text-ranking", "tr", "dataset:seroe/vodex-turkish-reranker-triplets", "arxiv:1908.10084", "base_model:cross-encoder/mmarco-mMiniLMv2-L12-H384-v1", "base_model:finetune:cross-encoder/mmarco-mMiniLMv2-L12-H384-v1", "license:apache-2.0", "model-index", "region:us" ]
text-ranking
2025-05-13T18:48:14Z
--- language: - tr license: apache-2.0 tags: - sentence-transformers - cross-encoder - generated_from_trainer - dataset_size:89964 - loss:CachedMultipleNegativesRankingLoss base_model: cross-encoder/mmarco-mMiniLMv2-L12-H384-v1 datasets: - seroe/vodex-turkish-reranker-triplets pipeline_tag: text-ranking library_name: sentence-transformers metrics: - map - mrr@10 - ndcg@10 model-index: - name: cross-encoder/mmarco-mMiniLMv2-L12-H384-v1 results: - task: type: cross-encoder-reranking name: Cross Encoder Reranking dataset: name: val hard type: val-hard metrics: - type: map value: 0.6093 name: Map - type: mrr@10 value: 0.6085 name: Mrr@10 - type: ndcg@10 value: 0.6994 name: Ndcg@10 - task: type: cross-encoder-reranking name: Cross Encoder Reranking dataset: name: test hard type: test-hard metrics: - type: map value: 0.6085 name: Map - type: mrr@10 value: 0.6077 name: Mrr@10 - type: ndcg@10 value: 0.6987 name: Ndcg@10 --- # cross-encoder/mmarco-mMiniLMv2-L12-H384-v1 This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/mmarco-mMiniLMv2-L12-H384-v1](https://huggingface.co/cross-encoder/mmarco-mMiniLMv2-L12-H384-v1) on the [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search. ## Model Details ## ⚠️ Domain-Specific Warning This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**. While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains. Please assess its performance carefully before applying it outside of telecommunications use cases. ### Model Description - **Model Type:** Cross Encoder - **Base model:** [cross-encoder/mmarco-mMiniLMv2-L12-H384-v1](https://huggingface.co/cross-encoder/mmarco-mMiniLMv2-L12-H384-v1) <!-- at revision 1427fd652930e4ba29e8149678df786c240d8825 --> - **Maximum Sequence Length:** 512 tokens - **Number of Output Labels:** 1 label - **Training Dataset:** - [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) - **Language:** tr - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder) ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import CrossEncoder # Download from the 🤗 Hub model = CrossEncoder("seroe/mmarco-mMiniLMv2-L12-H384-v1-turkish-reranker-triplet") # Get scores for pairs of texts pairs = [ ['Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?', 'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.'], ['Kampanya süresince internet hızı nasıl değişebilir?', 'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.'], ["Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?", "Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir."], ['Taahhüt süresi dolmadan internet hizmeti iptal edilirse ne olur?', 'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.'], ['Aylık 15 GB ek paketini nereden satın alabilirim?', 'Bu ek paketi almak için hangi kanalları kullanabilirim?'], ] scores = model.predict(pairs) print(scores.shape) # (5,) # Or rank different texts based on similarity to a single text ranks = model.rank( 'Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?', [ 'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.', 'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.', "Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.", 'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.', 'Bu ek paketi almak için hangi kanalları kullanabilirim?', ] ) # [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Cross Encoder Reranking * Datasets: `val-hard` and `test-hard` * Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters: ```json { "at_k": 10, "always_rerank_positives": true } ``` | Metric | val-hard | test-hard | |:------------|:---------------------|:---------------------| | map | 0.6093 (-0.0246) | 0.6085 (-0.0178) | | mrr@10 | 0.6085 (-0.0254) | 0.6077 (-0.0186) | | **ndcg@10** | **0.6994 (+0.0641)** | **0.6987 (+0.0705)** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### vodex-turkish-reranker-triplets * Dataset: [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) at [ca7d206](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets/tree/ca7d2063ad4fec15fbf739835ab6926e051950c0) * Size: 89,964 training samples * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | query | positive | negative | |:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 20 characters</li><li>mean: 57.83 characters</li><li>max: 112 characters</li></ul> | <ul><li>min: 35 characters</li><li>mean: 92.19 characters</li><li>max: 221 characters</li></ul> | <ul><li>min: 31 characters</li><li>mean: 78.41 characters</li><li>max: 143 characters</li></ul> | * Samples: | query | positive | negative | |:-------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------| | <code>Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?</code> | <code>Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.</code> | <code>Faturasız tarifelerde yurtdışı mesaj ücretleri 10 kuruş olarak uygulanmaktadır.</code> | | <code>Kampanya süresince internet hızı nasıl değişebilir?</code> | <code>Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.</code> | <code>Kampanya süresince internet hızı sabit kalır ve değişiklik yapılamaz.</code> | | <code>Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?</code> | <code>Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.</code> | <code>Vodafone tarifelerinde KDV ve ÖİV, abonelerin talep etmesi durumunda eklenmektedir.</code> | * Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters: ```json { "scale": 10.0, "num_negatives": 4, "activation_fn": "torch.nn.modules.activation.Sigmoid", "mini_batch_size": 32 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 1024 - `per_device_eval_batch_size`: 1024 - `learning_rate`: 5e-07 - `weight_decay`: 0.1 - `max_grad_norm`: 0.8 - `warmup_ratio`: 0.25 - `bf16`: True - `dataloader_num_workers`: 8 - `load_best_model_at_end`: True - `group_by_length`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 1024 - `per_device_eval_batch_size`: 1024 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-07 - `weight_decay`: 0.1 - `adam_beta1`: 
0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 0.8 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.25 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 8 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: True - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | val-hard_ndcg@10 | test-hard_ndcg@10 | |:-----:|:----:|:-------------:|:----------------:|:-----------------:| | 1.125 | 100 | 1.3041 | 0.7093 (+0.0740) | 0.7065 (+0.0783) | | 2.25 | 200 | 0.9232 | 0.6994 (+0.0641) | 0.6987 (+0.0705) | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 4.2.0.dev0 - Transformers: 4.46.3 - PyTorch: 2.5.1+cu124 - Accelerate: 1.6.0 - Datasets: 3.6.0 - Tokenizers: 0.20.3 ## Citation ### BibTeX #### Sentence Transformers ```bibtex 
@inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
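A typical deployment pairs this reranker with a first-stage bi-encoder; the sketch below is illustrative only — the retriever id, the tiny corpus, and the top-k value are assumptions, not part of the original training setup.

```python
from sentence_transformers import SentenceTransformer, CrossEncoder, util

# Assumed first-stage retriever; any Turkish bi-encoder can be substituted.
retriever = SentenceTransformer("seroe/bge-m3-turkish-triplet-matryoshka")
reranker = CrossEncoder("seroe/mmarco-mMiniLMv2-L12-H384-v1-turkish-reranker-triplet")

corpus = [
    "Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.",
    "Kampanya süresince internet hızı yükseltilebilir.",
    "Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.",
]
query = "Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?"

# Stage 1: dense retrieval of candidate passages.
corpus_emb = retriever.encode(corpus, convert_to_tensor=True)
query_emb = retriever.encode(query, convert_to_tensor=True)
candidates = util.semantic_search(query_emb, corpus_emb, top_k=3)[0]

# Stage 2: rescore the retrieved candidates with the cross-encoder.
pairs = [(query, corpus[c["corpus_id"]]) for c in candidates]
scores = reranker.predict(pairs)
for score, c in sorted(zip(scores, candidates), key=lambda x: -x[0]):
    print(round(float(score), 3), corpus[c["corpus_id"]])
```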
seroe/ms-marco-MiniLM-L12-v2-turkish-reranker-triplet
seroe
2025-06-19T09:03:00Z
9
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "cross-encoder", "generated_from_trainer", "dataset_size:89964", "loss:CachedMultipleNegativesRankingLoss", "text-ranking", "tr", "dataset:seroe/vodex-turkish-reranker-triplets", "arxiv:1908.10084", "base_model:cross-encoder/ms-marco-MiniLM-L12-v2", "base_model:finetune:cross-encoder/ms-marco-MiniLM-L12-v2", "license:apache-2.0", "model-index", "region:us" ]
text-ranking
2025-05-13T19:33:21Z
--- language: - tr license: apache-2.0 tags: - sentence-transformers - cross-encoder - generated_from_trainer - dataset_size:89964 - loss:CachedMultipleNegativesRankingLoss base_model: - cross-encoder/ms-marco-MiniLM-L12-v2 datasets: - seroe/vodex-turkish-reranker-triplets pipeline_tag: text-ranking library_name: sentence-transformers metrics: - map - mrr@10 - ndcg@10 model-index: - name: cross-encoder/ms-marco-MiniLM-L12-v2 results: - task: type: cross-encoder-reranking name: Cross Encoder Reranking dataset: name: val hard type: val-hard metrics: - type: map value: 0.6082 name: Map - type: mrr@10 value: 0.6074 name: Mrr@10 - type: ndcg@10 value: 0.6986 name: Ndcg@10 - task: type: cross-encoder-reranking name: Cross Encoder Reranking dataset: name: test hard type: test-hard metrics: - type: map value: 0.6059 name: Map - type: mrr@10 value: 0.6051 name: Mrr@10 - type: ndcg@10 value: 0.6967 name: Ndcg@10 --- # cross-encoder/ms-marco-MiniLM-L12-v2 This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) on the [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search. ## Model Details ## ⚠️ Domain-Specific Warning This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**. While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains. Please assess its performance carefully before applying it outside of telecommunications use cases. ### Model Description - **Model Type:** Cross Encoder - **Base model:** [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) <!-- at revision 1427fd652930e4ba29e8149678df786c240d8825 --> - **Maximum Sequence Length:** 512 tokens - **Number of Output Labels:** 1 label - **Training Dataset:** - [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) - **Language:** tr - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder) ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import CrossEncoder # Download from the 🤗 Hub model = CrossEncoder("seroe/ms-marco-MiniLM-L12-v2-turkish-reranker-triplet") # Get scores for pairs of texts pairs = [ ['Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?', 'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.'], ['Kampanya süresince internet hızı nasıl değişebilir?', 'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.'], ["Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?", "Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir."], ['Taahhüt süresi dolmadan internet hizmeti iptal edilirse ne olur?', 'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.'], ['Aylık 15 GB ek paketini nereden satın alabilirim?', 'Bu ek paketi almak için hangi kanalları kullanabilirim?'], ] scores = model.predict(pairs) print(scores.shape) # (5,) # Or rank different texts based on similarity to a single text ranks = model.rank( 'Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?', [ 'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.', 'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.', "Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.", 'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.', 'Bu ek paketi almak için hangi kanalları kullanabilirim?', ] ) # [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Cross Encoder Reranking * Datasets: `val-hard` and `test-hard` * Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters: ```json { "at_k": 10, "always_rerank_positives": true } ``` | Metric | val-hard | test-hard | |:------------|:---------------------|:---------------------| | map | 0.6082 (-0.0256) | 0.6059 (-0.0204) | | mrr@10 | 0.6074 (-0.0264) | 0.6051 (-0.0212) | | **ndcg@10** | **0.6986 (+0.0633)** | **0.6967 (+0.0686)** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### vodex-turkish-reranker-triplets * Dataset: [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) at [ca7d206](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets/tree/ca7d2063ad4fec15fbf739835ab6926e051950c0) * Size: 89,964 training samples * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | query | positive | negative | |:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 20 characters</li><li>mean: 57.83 characters</li><li>max: 112 characters</li></ul> | <ul><li>min: 35 characters</li><li>mean: 92.19 characters</li><li>max: 221 characters</li></ul> | <ul><li>min: 31 characters</li><li>mean: 78.41 characters</li><li>max: 143 characters</li></ul> | * Samples: | query | positive | negative | |:-------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------| | <code>Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?</code> | <code>Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.</code> | <code>Faturasız tarifelerde yurtdışı mesaj ücretleri 10 kuruş olarak uygulanmaktadır.</code> | | <code>Kampanya süresince internet hızı nasıl değişebilir?</code> | <code>Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.</code> | <code>Kampanya süresince internet hızı sabit kalır ve değişiklik yapılamaz.</code> | | <code>Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?</code> | <code>Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.</code> | <code>Vodafone tarifelerinde KDV ve ÖİV, abonelerin talep etmesi durumunda eklenmektedir.</code> | * Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters: ```json { "scale": 10.0, "num_negatives": 4, "activation_fn": "torch.nn.modules.activation.Sigmoid", "mini_batch_size": 32 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 1024 - `per_device_eval_batch_size`: 1024 - `learning_rate`: 5e-07 - `weight_decay`: 0.1 - `max_grad_norm`: 0.8 - `warmup_ratio`: 0.25 - `bf16`: True - `dataloader_num_workers`: 8 - `load_best_model_at_end`: True - `group_by_length`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 1024 - `per_device_eval_batch_size`: 1024 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-07 - `weight_decay`: 0.1 - `adam_beta1`: 
0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 0.8 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.25 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 8 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: True - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | val-hard_ndcg@10 | test-hard_ndcg@10 | |:------:|:----:|:-------------:|:----------------:|:-----------------:| | 0.5682 | 50 | - | 0.7103 (+0.0750) | 0.7063 (+0.0782) | | 1.125 | 100 | 1.3021 | 0.7094 (+0.0741) | 0.7065 (+0.0783) | | 1.6932 | 150 | - | 0.7041 (+0.0688) | 0.7047 (+0.0765) | | 2.25 | 200 | 0.9216 | 0.6997 (+0.0643) | 0.6996 (+0.0715) | | 2.8182 | 250 | - | 0.6986 (+0.0633) | 0.6967 (+0.0686) | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 
4.2.0.dev0 - Transformers: 4.46.3 - PyTorch: 2.5.1+cu124 - Accelerate: 1.6.0 - Datasets: 3.6.0 - Tokenizers: 0.20.3 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
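The loss configuration listed above can be reproduced with the cross-encoder training API; the snippet below is a hedged sketch assuming sentence-transformers v4+, where `CrossEncoderTrainer` and the cross-encoder `CachedMultipleNegativesRankingLoss` are available, and it omits the evaluation and checkpointing setup used for the published model.

```python
import torch
from datasets import load_dataset
from sentence_transformers.cross_encoder import CrossEncoder, CrossEncoderTrainer
from sentence_transformers.cross_encoder.losses import CachedMultipleNegativesRankingLoss

# Start from the same base reranker used for this model.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L12-v2")

# (query, positive, negative) triplets, as described in the dataset section.
train_dataset = load_dataset("seroe/vodex-turkish-reranker-triplets", split="train")

# Loss parameters mirror the configuration listed in this card.
loss = CachedMultipleNegativesRankingLoss(
    model=model,
    num_negatives=4,
    scale=10.0,
    activation_fn=torch.nn.Sigmoid(),
    mini_batch_size=32,
)

trainer = CrossEncoderTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```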
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_14_1_49
winnieyangwannan
2025-06-19T09:02:18Z
15
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-11T23:18:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed7-2025-06-19
morturr
2025-06-19T09:02:07Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T03:01:12Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed7-2025-06-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed7-2025-06-19 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 7 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
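The adapter card above documents the training setup but gives no loading snippet. Below is a hedged sketch, not part of the original card, of one way to attach this PEFT LoRA adapter to the base model it was trained from. It assumes gated access to `meta-llama/Llama-2-7b-hf`, that fp16 with `device_map="auto"` fits your hardware, and the prompt is only an illustrative placeholder.

```python
# Hedged sketch: attach the LoRA adapter from this repo to its Llama-2 base model.
# Requires `transformers`, `peft`, `accelerate`, and gated access to the base weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed7-2025-06-19"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # loads the adapter weights on top

prompt = "Tell me a short one-liner joke."  # illustrative placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```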
seroe/Qwen3-Embedding-0.6B-turkish-triplet-matryoshka-v2
seroe
2025-06-19T09:01:51Z
73
1
sentence-transformers
[ "sentence-transformers", "safetensors", "qwen3", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:215676", "loss:MatryoshkaLoss", "loss:CachedMultipleNegativesRankingLoss", "tr", "dataset:seroe/vodex-turkish-triplets-large", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:2101.06983", "base_model:Qwen/Qwen3-Embedding-0.6B", "base_model:finetune:Qwen/Qwen3-Embedding-0.6B", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-16T10:36:21Z
--- language: - tr license: apache-2.0 tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:215676 - loss:MatryoshkaLoss - loss:CachedMultipleNegativesRankingLoss base_model: Qwen/Qwen3-Embedding-0.6B widget: - source_sentence: Ortak Havuz Tarifesi kapsamında her bir hat için aylık sabit ücret 25,5 TL olarak belirlenmiştir ve bu ücret KDV ile ÖİV dahil olarak hesaplanmıştır. sentences: - Vodafone'un Ortak Havuz Tarifesi, müşterilere ücretsiz olarak sunulmaktadır ve herhangi bir sabit ücret talep edilmemektedir. - Her bir hat için Ortak Havuz Tarifesi'nin aylık sabit ücreti, vergiler dahil 25,5 TL'dir. - Geleceğin Karakartalı, Vodafone ve Beşiktaş JK'nin ortak çalışmasıyla genç futbol yeteneklerini Türkiye çapında keşfetmeyi hedefleyen bir projedir. - source_sentence: Taşınma durumunda, teknik altyapı eksikliği nedeniyle Vodafone Evde Fiber İnternet hizmeti sağlanamazsa, cezai işlem uygulanmaz ve alternatif hizmetlere geçiş yapılabilir. sentences: - Vodafone faturasız tarifelerinde, yenileme gününde yeterli bakiye bulunmuyorsa içerikler yenilenmez, ancak daha önce yüklenen içerikler bakiye olmadan kullanılabilir. - Vodafone Evde Fiber İnternet hizmeti taşınma sırasında altyapı eksikliği nedeniyle sağlanamazsa, aboneye ceza uygulanır ve alternatif hizmet sunulmaz. - Vodafone Evde Fiber İnternet hizmeti taşınma sırasında altyapı eksikliği nedeniyle sağlanamazsa, aboneye cezai işlem uygulanmaz ve ADSL veya Fiber Hız hizmetine geçiş yapılabilir. - source_sentence: Tarife kapsamında sunulan dakika, SMS ve internet hakları, belirli bir limitin aşılması durumunda ek paketlerle ücretlendirilir. sentences: - Vodafone'un tarifelerinde aşım durumunda ek paketler yerine ücretsiz kullanım hakkı sunulmaktadır. - Vodafone'un geçmiş kampanyalarında, Samsung Note 3 Neo en pahalı cihazlardan biri olarak müşterilere sunulmuştur. - Uyumlu 5 Tarifesi'nde verilen dakika, SMS ve internet hakları bittiğinde, ek paketler devreye girerek avantajlı fiyatlarla ücretlendirme yapılır. - source_sentence: Mobil numara taşınabilirliği olan ülkelerdeki Vodafone operatörlerine yapılan aramalarda, diğer mobil operatörlerin prefikslerini içeren numaralar indirimli tarifeden yararlanamaz. sentences: - Vodafone operatörlerine yapılan aramalarda, mobil numara taşınabilirliği olan ülkelerdeki diğer operatörlerin prefikslerini içeren numaralar indirimli fiyatlara dahil edilmez. - Vodafone'un tarifelerinde ücretsiz pass paketi yer almazken, Her Şey Dahil Pasaport paketi ücretli olarak sunulmaktadır. - Dünya Küçük Kurumsal tarifesi, yurtdışı aramalar için tüm numaralara eşit fiyatlandırma sunar ve prefiks kısıtlaması içermez. - source_sentence: Vodafone hattınızın aktivasyonu sırasında eksik evraklarınızın tamamlanması gerektiği belirtilmiştir. Hattınızın iptal olmaması için 3 gün içinde Vodafone mağazasını ziyaret etmeniz önemlidir. sentences: - Vodafone, boşanmış annelerin çocuklarına gençlik tarifesi tanımlaması için kimlik doğrulama ve kullanıcı tanımlama süreçlerini Cep Merkezlerinde gerçekleştirmektedir. - Vodafone hattınızın aktivasyonu sırasında herhangi bir evrak eksikliği bulunmamaktadır. Hattınız otomatik olarak aktif hale gelecektir. - Vodafone hattınızın aktif hale gelebilmesi için eksik olan abonelik sözleşmesi ve kimlik belgelerinizi tamamlamanız gerekmektedir. Bu işlemi 3 gün içinde Vodafone mağazasında gerçekleştirebilirsiniz. 
datasets: - seroe/vodex-turkish-triplets-large pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy model-index: - name: Qwen3-Embedding-0.6B Türkçe Triplet Matryoshka results: - task: type: triplet name: Triplet dataset: name: tr triplet dev 1024d type: tr-triplet-dev-1024d metrics: - type: cosine_accuracy value: 0.998831570148468 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: tr triplet dev 768d type: tr-triplet-dev-768d metrics: - type: cosine_accuracy value: 0.9987481236457825 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: tr triplet dev 512d type: tr-triplet-dev-512d metrics: - type: cosine_accuracy value: 0.9986646771430969 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: tr triplet dev 256d type: tr-triplet-dev-256d metrics: - type: cosine_accuracy value: 0.9984142780303955 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: all nli test 1024d type: all-nli-test-1024d metrics: - type: cosine_accuracy value: 0.9987481236457825 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: all nli test 768d type: all-nli-test-768d metrics: - type: cosine_accuracy value: 0.9986646771430969 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: all nli test 512d type: all-nli-test-512d metrics: - type: cosine_accuracy value: 0.9986646771430969 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: all nli test 256d type: all-nli-test-256d metrics: - type: cosine_accuracy value: 0.99833083152771 name: Cosine Accuracy --- # Qwen3-Embedding-0.6B Türkçe Triplet Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) on the [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ## ⚠️ Domain-Specific Warning This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**. While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains. Please assess its performance carefully before applying it outside of telecommunications use cases. 
### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) <!-- at revision 744169034862c8eec56628663995004342e4e449 --> - **Maximum Sequence Length:** 32768 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) - **Language:** tr - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 32768, 'do_lower_case': False}) with Transformer model: Qwen3Model (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("seroe/Qwen3-Embedding-0.6B-turkish-triplet-matryoshka-v2") # Run inference sentences = [ 'Vodafone hattınızın aktivasyonu sırasında eksik evraklarınızın tamamlanması gerektiği belirtilmiştir. Hattınızın iptal olmaması için 3 gün içinde Vodafone mağazasını ziyaret etmeniz önemlidir.', 'Vodafone hattınızın aktif hale gelebilmesi için eksik olan abonelik sözleşmesi ve kimlik belgelerinizi tamamlamanız gerekmektedir. Bu işlemi 3 gün içinde Vodafone mağazasında gerçekleştirebilirsiniz.', 'Vodafone hattınızın aktivasyonu sırasında herhangi bir evrak eksikliği bulunmamaktadır. Hattınız otomatik olarak aktif hale gelecektir.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Datasets: `tr-triplet-dev-1024d` and `all-nli-test-1024d` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters: ```json { "truncate_dim": 1024 } ``` | Metric | tr-triplet-dev-1024d | all-nli-test-1024d | |:--------------------|:---------------------|:-------------------| | **cosine_accuracy** | **0.9988** | **0.9987** | #### Triplet * Datasets: `tr-triplet-dev-768d` and `all-nli-test-768d` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters: ```json { "truncate_dim": 768 } ``` | Metric | tr-triplet-dev-768d | all-nli-test-768d | |:--------------------|:--------------------|:------------------| | **cosine_accuracy** | **0.9987** | **0.9987** | #### Triplet * Datasets: `tr-triplet-dev-512d` and `all-nli-test-512d` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters: ```json { "truncate_dim": 512 } ``` | Metric | tr-triplet-dev-512d | all-nli-test-512d | |:--------------------|:--------------------|:------------------| | **cosine_accuracy** | **0.9987** | **0.9987** | #### Triplet * Datasets: `tr-triplet-dev-256d` and `all-nli-test-256d` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) with these parameters: ```json { "truncate_dim": 256 } ``` | Metric | tr-triplet-dev-256d | all-nli-test-256d | |:--------------------|:--------------------|:------------------| | **cosine_accuracy** | **0.9984** | **0.9983** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### vodex-turkish-triplets-large * Dataset: [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) at [1fe9d63](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large/tree/1fe9d63490a69cb96da6b76f4bff1a43c48cbdee) * Size: 215,676 training samples * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | query | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 19 tokens</li><li>mean: 44.11 tokens</li><li>max: 90 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 45.45 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 37.27 tokens</li><li>max: 76 tokens</li></ul> | * Samples: | query | positive | negative | |:--------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------| | <code>Vodafone'un Mobil Form Servisi, 12 Şubat 2021 itibarıyla yeni müşteri alımına kapatılmıştır.</code> | <code>12 Şubat 2021 tarihinden itibaren Vodafone'un Mobil Form Servisi yeni kullanıcılar için erişime kapatılmıştır.</code> | <code>Mobil Form Servisi, 2022 yılında yeni müşterilere açılmıştır ve aktif olarak kullanılmaktadır.</code> | | <code>Paket, VOIP ve P2P gibi hizmetleri desteklemez ve cihazın 4.5G/3G ayarlarının yapılmış olması gereklidir.</code> | <code>Vodafone'un paketi, VOIP ve P2P hizmetlerini içermez ve cihazın 4.5G/3G bağlantı ayarlarının aktif olması gerekir.</code> | <code>Paket, VOIP ve P2P hizmetlerini tamamen destekler ve cihaz ayarlarına gerek olmadan kullanılabilir.</code> | | <code>Vodafone'un bireysel esnaf tarifeleri, farklı meslek gruplarına özel paket seçenekleriyle geniş bir yelpaze sunar.</code> | <code>Bireysel esnaf tarifeleri, Vodafone tarafından çeşitli meslek gruplarına uygun paketlerle desteklenmektedir.</code> | <code>Vodafone'un bireysel esnaf tarifeleri, yalnızca belirli bir meslek grubuna hitap eder ve diğer meslekler için uygun değildir.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "CachedMultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 768, 512, 256 ], "matryoshka_weights": [ 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Evaluation Dataset #### vodex-turkish-triplets-large * Dataset: [vodex-turkish-triplets-large](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large) at [1fe9d63](https://huggingface.co/datasets/seroe/vodex-turkish-triplets-large/tree/1fe9d63490a69cb96da6b76f4bff1a43c48cbdee) * Size: 11,982 evaluation samples * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | query | positive | negative | 
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 23 tokens</li><li>mean: 44.51 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 45.89 tokens</li><li>max: 93 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 37.23 tokens</li><li>max: 89 tokens</li></ul> | * Samples: | query | positive | negative | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Vodafone'un 'Yarına Hazır Cihaz Kampanyaları' kapsamında farklı marka ve modellerde cihazlar, çeşitli depolama seçenekleri ve renk alternatifleriyle sunulmaktadır.</code> | <code>Vodafone'un cihaz kampanyaları, geniş ürün yelpazesiyle farklı depolama kapasiteleri ve renk seçenekleri sunarak kullanıcıların ihtiyaçlarına hitap etmektedir.</code> | <code>Vodafone'un cihaz kampanyaları yalnızca belirli bir marka ve modelle sınırlıdır, diğer seçenekler sunulmamaktadır.</code> | | <code>Devreden ve duran tarifeler, kullanıcıların kullanılmayan internet haklarını bir sonraki döneme taşımasına olanak tanır ve ek paketlerle bu haklar genişletilebilir.</code> | <code>Kullanıcılar, devreden ve duran tarifeler sayesinde kullanılmayan internet haklarını bir sonraki aya aktarabilir ve ek paketlerle bu haklarını artırabilir.</code> | <code>Devreden ve duran tarifeler, kullanıcıların internet haklarını bir sonraki döneme taşımasına izin vermez, yalnızca mevcut dönemde kullanım sağlar.</code> | | <code>Cebinize Uyan İnternet kampanyası, numara taşıma, yeni hat tesisi veya tarifeler arası geçiş yapan abonelere otomatik olarak tanımlanır. 
Bu haklar, diğer promosyonlardan sonra kullanılabilir.</code> | <code>Vodafone'un kampanyası, numara taşıma, yeni hat açma veya tarifeler arası geçiş yapan abonelere otomatik olarak tanımlanır ve kullanım sırası diğer promosyonlardan sonra gelir.</code> | <code>Evcil hayvan sahiplenme kampanyasında, yeni bir evcil hayvan edinen kişilere ücretsiz mama ve bakım hizmeti sunulur.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "CachedMultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 768, 512, 256 ], "matryoshka_weights": [ 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 2048 - `per_device_eval_batch_size`: 256 - `weight_decay`: 0.01 - `num_train_epochs`: 1 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.05 - `save_only_model`: True - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 2048 - `per_device_eval_batch_size`: 256 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.01 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.05 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: True - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - 
`hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | tr-triplet-dev-1024d_cosine_accuracy | tr-triplet-dev-768d_cosine_accuracy | tr-triplet-dev-512d_cosine_accuracy | tr-triplet-dev-256d_cosine_accuracy | all-nli-test-1024d_cosine_accuracy | all-nli-test-768d_cosine_accuracy | all-nli-test-512d_cosine_accuracy | all-nli-test-256d_cosine_accuracy | |:------:|:----:|:-------------:|:---------------:|:------------------------------------:|:-----------------------------------:|:-----------------------------------:|:-----------------------------------:|:----------------------------------:|:---------------------------------:|:---------------------------------:|:---------------------------------:| | -1 | -1 | - | - | 0.8870 | 0.8923 | 0.8932 | 0.8857 | - | - | - | - | | 0.1132 | 12 | 3.4225 | 0.2848 | 0.9895 | 0.9901 | 0.9899 | 0.9893 | - | - | - | - | | 0.2264 | 24 | 0.8769 | 0.1916 | 0.9935 | 0.9945 | 0.9943 | 0.9937 | - | - | - | - | | 0.3396 | 36 | 0.6888 | 0.1444 | 0.9967 | 0.9972 | 0.9969 | 0.9969 | - | - | - | - | | 0.4528 | 48 | 0.6153 | 0.1289 | 0.9975 | 0.9978 | 0.9977 | 0.9977 | - | - | - | - | | 0.5660 | 60 | 0.5698 | 0.1169 | 0.9981 | 0.9982 | 0.9981 | 0.9977 | - | - | - | - | | 0.6792 | 72 | 0.5513 | - | 0.9976 | - | - | - | - | - | - | - | | 0.1132 | 12 | 0.4944 | 0.1167 | 0.9977 | 0.9977 | 0.9977 | 0.9974 | - | - | - | - | | 0.2264 | 24 | 0.4464 | 0.1220 | 0.9983 | 0.9983 | 0.9982 | 0.9981 | - | - | - | - | | 0.3396 | 36 | 0.371 | 0.1116 | 0.9982 | 0.9982 | 0.9980 | 0.9976 | - | - | - | - | | 0.4528 | 48 | 0.3369 | 0.1068 | 0.9983 | 0.9983 | 0.9983 | 0.9979 | - | - | - | - | | 0.5660 | 60 | 0.3243 | 0.1006 | 0.9986 | 0.9986 | 0.9984 | 0.9981 | - | - | - | - | | 0.6792 | 72 | 0.3895 | 0.0945 | 0.9987 | 0.9986 | 0.9984 | 0.9984 | - | - | - | - | | 0.7925 | 84 | 0.4668 | 0.0908 | 0.9987 | 0.9987 | 0.9985 | 0.9982 | - | - | - | - | | 0.9057 | 96 | 0.4319 | 0.0863 | 0.9988 | 0.9987 | 0.9987 | 0.9984 | - | - | - | - | | -1 | -1 | - | - | - | - | - | - | 0.9987 | 0.9987 | 0.9987 | 0.9983 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 4.2.0.dev0 - Transformers: 4.52.3 - PyTorch: 2.7.0+cu126 - Accelerate: 1.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association 
for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### CachedMultipleNegativesRankingLoss ```bibtex @misc{gao2021scaling, title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup}, author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan}, year={2021}, eprint={2101.06983}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
ASIEK/dqn-SpaceInvadersNoFrameskip-v4
ASIEK
2025-06-19T08:58:36Z
14
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-06-18T04:31:29Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 645.50 +/- 149.22 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ASIEK -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ASIEK -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ASIEK ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
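The card above covers the RL Zoo command line only. As a hedged supplement, not part of the original card, the downloaded checkpoint can also be loaded directly with Stable-Baselines3 in Python; the path under `logs/` is an assumption, so use whatever location the download command actually reports, and Atari ROMs must be installed (for example via `autorom`).

```python
# Hedged sketch: load the checkpoint fetched by `rl_zoo3.load_from_hub` and run it.
# The zip path below is an assumption; adjust it to the folder the download created.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)  # matches frame_stack=4 in the hyperparameters above

model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```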
udayks/Taxi-v3
udayks
2025-06-19T08:58:02Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-19T08:57:59Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.78 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="udayks/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
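As a hedged, self-contained companion to the snippet in the card, the sketch below (not part of the original card) downloads the pickle with `huggingface_hub` and acts greedily with the stored Q-table. The `load_from_hub` helper is reimplemented here, and the `"qtable"` key name is an assumption, so inspect `model.keys()` before relying on it.

```python
# Hedged sketch: download the pickled Q-learning model and roll out a greedy episode.
# Assumption: the pickle is a dict with an "env_id" entry (used in the card) and a
# "qtable" entry; print model.keys() to confirm the actual field names.
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download a pickled model from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="udayks/Taxi-v3", filename="q-learning.pkl")
print(model.keys())  # confirm which fields the pickle exposes

env = gym.make(model["env_id"])
qtable = np.asarray(model["qtable"])  # assumed key name

state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```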
yezg/qwen2.5-sqlbot-gguf
yezg
2025-06-19T08:57:35Z
29
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-16T08:18:50Z
--- base_model: unsloth/qwen2.5-coder-7b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** yezg - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-coder-7b-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
morturr/Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed18-2025-06-19
morturr
2025-06-19T08:57:23Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-06-19T05:38:03Z
--- library_name: peft license: llama2 base_model: meta-llama/Llama-2-7b-hf tags: - trl - sft - generated_from_trainer model-index: - name: Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed18-2025-06-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed18-2025-06-19 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 18 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
John6666/gray-color-25d-model-v10-testing-sdxl
John6666
2025-06-19T08:56:43Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "2.5D", "girls", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-19T08:50:42Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - 2.5D - girls - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- Original model is [here](https://civitai.com/models/1693405/graycolor-25d-model?modelVersionId=1916475). This model created by [GrayColor](https://civitai.com/user/GrayColor).
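The card above only points to the original Civitai page. Below is a hedged loading sketch, not part of the original card, assuming the checkpoint works with the standard SDXL text-to-image pipeline (as the `diffusers:StableDiffusionXLPipeline` tag suggests); fp16, CUDA, and the prompt are illustrative assumptions.

```python
# Hedged sketch: load this SDXL checkpoint with diffusers and generate one image.
# fp16 + CUDA are assumptions; the prompt and sampler settings are placeholders.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/gray-color-25d-model-v10-testing-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "1girl, 2.5D, detailed anime illustration",   # placeholder prompt
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("sample.png")
```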
Alphatao/Affine-5878053
Alphatao
2025-06-19T08:56:22Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:2309.00071", "arxiv:2505.09388", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T08:50:32Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE pipeline_tag: text-generation base_model: - Qwen/Qwen3-8B-Base --- # Qwen3-8B <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Qwen3 Highlights Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features: - **Uniquely support of seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within single model**, ensuring optimal performance across various scenarios. - **Significantly enhancement in its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience. - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and unthinking modes and achieving leading performance among open-source models in complex agent-based tasks. - **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. ## Model Overview **Qwen3-8B** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 8.2B - Number of Paramaters (Non-Embedding): 6.95B - Number of Layers: 36 - Number of Attention Heads (GQA): 32 for Q and 8 for KV - Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts). For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Quickstart The code of Qwen3 has been in the latest Hugging Face `transformers` and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3' ``` The following contains a code snippet illustrating how to use the model generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-8B" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language model." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Switches between thinking and non-thinking modes. Default is True. 
) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` or to create an OpenAI-compatible API endpoint: - SGLang: ```shell python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3 ``` - vLLM: ```shell vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1 ``` For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3. ## Switching Between Thinking and Non-Thinking Mode > [!TIP] > The `enable_thinking` switch is also available in APIs created by SGLang and vLLM. > Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users. ### `enable_thinking=True` By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # True is the default value for enable_thinking ) ``` In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response. > [!NOTE] > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### `enable_thinking=False` We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Setting enable_thinking=False disables thinking mode ) ``` In this mode, the model will not generate any think content and will not include a `<think>...</think>` block. > [!NOTE] > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. 
Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. Here is an example of a multi-turn conversation: ```python from transformers import AutoModelForCausalLM, AutoTokenizer class QwenChatbot: def __init__(self, model_name="Qwen/Qwen3-8B"): self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForCausalLM.from_pretrained(model_name) self.history = [] def generate_response(self, user_input): messages = self.history + [{"role": "user", "content": user_input}] text = self.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = self.tokenizer(text, return_tensors="pt") response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist() response = self.tokenizer.decode(response_ids, skip_special_tokens=True) # Update history self.history.append({"role": "user", "content": user_input}) self.history.append({"role": "assistant", "content": response}) return response # Example Usage if __name__ == "__main__": chatbot = QwenChatbot() # First input (without /think or /no_think tags, thinking mode is enabled by default) user_input_1 = "How many r's in strawberries?" print(f"User: {user_input_1}") response_1 = chatbot.generate_response(user_input_1) print(f"Bot: {response_1}") print("----------------------") # Second input with /no_think user_input_2 = "Then, how many r's in blueberries? /no_think" print(f"User: {user_input_2}") response_2 = chatbot.generate_response(user_input_2) print(f"Bot: {response_2}") print("----------------------") # Third input with /think user_input_3 = "Really? /think" print(f"User: {user_input_3}") response_3 = chatbot.generate_response(user_input_3) print(f"Bot: {response_3}") ``` > [!NOTE] > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. > When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. ```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-8B', # Use the endpoint provided by Alibaba Model Studio: # 'model_type': 'qwen_dashscope', # 'api_key': os.getenv('DASHSCOPE_API_KEY'), # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', # Other parameters: # 'generate_cfg': { # # Add: When the response content is `<think>this is the thought</think>this is the answer; # # Do not add: When the response has been separated by reasoning_content and content. 
# 'thought_in_content': True, # }, } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Processing Long Texts Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method. YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks: - Modifying the model files: In the `config.json` file, add the `rope_scaling` fields: ```json { ..., "rope_scaling": { "rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } } ``` For `llama.cpp`, you need to regenerate the GGUF file after the modification. - Passing command line arguments: For `vllm`, you can use ```shell vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072 ``` For `sglang`, you can use ```shell python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}' ``` For `llama-server` from `llama.cpp`, you can use ```shell llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 ``` > [!IMPORTANT] > If you encounter the following warning > ``` > Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'} > ``` > please upgrade `transformers>=4.51.0`. > [!NOTE] > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.** > We advise adding the `rope_scaling` configuration only when processing long contexts is required. > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0. > [!NOTE] > The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance. > [!TIP] > The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed. ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. 
**DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed. ### Citation If you find our work helpful, feel free to give us a cite. ``` @misc{qwen3technicalreport, title={Qwen3 Technical Report}, author={Qwen Team}, year={2025}, eprint={2505.09388}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2505.09388}, } ```
Varinder2110/794a1f43-1eb9-4fbb-9178-26a04bc30f0c
Varinder2110
2025-06-19T08:52:46Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-19T08:06:59Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # 794A1F43 1Eb9 4Fbb 9178 26A04Bc30F0C <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/Varinder2110/794a1f43-1eb9-4fbb-9178-26a04bc30f0c/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Varinder2110/794a1f43-1eb9-4fbb-9178-26a04bc30f0c', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 4000 - Learning rate: 0.0004 - LoRA rank: 12 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Varinder2110/794a1f43-1eb9-4fbb-9178-26a04bc30f0c/discussions) to add images that show off what you’ve made with this LoRA.
sanchit42/qwen3-0.6B-base-29reports-lora256-reason
sanchit42
2025-06-19T08:49:20Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T08:47:53Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
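The "How to Get Started with the Model" section of this card is left as a placeholder. A minimal sketch, assuming the repository is a standard Qwen3 text-generation checkpoint (as the `qwen3`, `text-generation`, and `conversational` tags suggest) and loadable with the stock `transformers` pipeline; the repo id below is a placeholder, not the actual model id.

```python
# Minimal sketch, not the authors' documented usage: run a Qwen3-based chat model
# through the standard transformers text-generation pipeline.
# "your-org/your-qwen3-model" is a placeholder; substitute the actual repo id.
from transformers import pipeline

generator = pipeline("text-generation", model="your-org/your-qwen3-model", device_map="auto")
messages = [{"role": "user", "content": "Explain in one sentence what a model card is."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```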
morturr/Mistral-7B-v0.1-headlines-seed-28-2025-06-19
morturr
2025-06-19T08:48:49Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2025-06-19T08:48:34Z
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - trl - sft - generated_from_trainer model-index: - name: Mistral-7B-v0.1-headlines-seed-28-2025-06-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1-headlines-seed-28-2025-06-19 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 28 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
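Because the card's usage section is empty, here is a hedged sketch of how a PEFT LoRA adapter like this one is typically loaded on top of its base model. The adapter repo id and base model come from the card's metadata; the prompt is arbitrary, since the training data is not documented.

```python
# Minimal sketch (assumption: the repo holds only LoRA adapter weights, as the
# "base_model:adapter:mistralai/Mistral-7B-v0.1" tag indicates).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "morturr/Mistral-7B-v0.1-headlines-seed-28-2025-06-19")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# The prompt is illustrative only; the fine-tuning dataset is not documented in the card.
inputs = tokenizer("Write a headline about city traffic:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```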
New-tutorial-Jobz-Hunting-19-Viral-Videos/FULL.VIDEO.Jobz.Hunting.Sajal.Malik.Viral.Video.Tutorial.Official
New-tutorial-Jobz-Hunting-19-Viral-Videos
2025-06-19T08:48:19Z
0
0
null
[ "region:us" ]
null
2025-06-19T08:48:11Z
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-tvs-01.pdf) https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-tvs-01.pdf https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-cudis.pdf
phospho-app/Kai-13-gr00t-example_dataset_v2-se6pf
phospho-app
2025-06-19T08:48:05Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-06-19T08:38:18Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful. Try it out on your robot! ## Training parameters: - **Dataset**: [Kai-13/example_dataset_v2](https://huggingface.co/datasets/Kai-13/example_dataset_v2) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 49 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
New-tutorial-kamal-Kaur-19-videos/FULL.VIDEO.kamal.Kaur.viral.video.Link.viral.On.Social.Media.Official
New-tutorial-kamal-Kaur-19-videos
2025-06-19T08:44:55Z
0
0
null
[ "region:us" ]
null
2025-06-19T08:44:50Z
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-tvs-01.pdf) https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-tvs-01.pdf
John6666/cyberrealistic-xl-catalyst-xl-v10-sdxl
John6666
2025-06-19T08:44:38Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "experimental offshoot", "realism collides", "unpredictability", "en", "base_model:cyberdelia/latest_sdxl_models", "base_model:finetune:cyberdelia/latest_sdxl_models", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-06-19T08:37:36Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - experimental offshoot - realism collides - unpredictability base_model: cyberdelia/latest_sdxl_models --- The original model is [here](https://huggingface.co/cyberdelia/latest_sdxl_models) and on [Civitai](https://civitai.com/models/1694531/cyberrealistic-xl-catalyst?modelVersionId=1917749). The author is [here](https://huggingface.co/cyberdelia). This model was created by [Cyberdelia](https://civitai.com/user/Cyberdelia).
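Since the repo tags name `StableDiffusionXLPipeline`, a minimal diffusers loading sketch follows; the prompt and sampler settings are illustrative, not the author's recommended values.

```python
# Minimal sketch: load the SDXL checkpoint with the pipeline class named in the repo tags.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/cyberrealistic-xl-catalyst-xl-v10-sdxl", torch_dtype=torch.float16
).to("cuda")

# Example prompt and settings are placeholders; tune them for your use case.
image = pipe(
    "photo of a lighthouse at dusk, photorealistic",
    num_inference_steps=30,
    guidance_scale=5.0,
).images[0]
image.save("lighthouse.png")
```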
Official-mezzo-fun-19-Viral-videos/FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
Official-mezzo-fun-19-Viral-videos
2025-06-19T08:44:12Z
0
0
null
[ "region:us" ]
null
2025-06-19T08:44:04Z
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-tvs-01.pdf) https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-tvs-01.pdf
Velkey-J/bert-finetuned-ner
Velkey-J
2025-06-19T08:43:56Z
32
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-06-17T13:43:22Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 model-index: - name: bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.1+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
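As a usage hint the card itself omits, here is a minimal sketch of running the checkpoint through the standard token-classification pipeline; the example sentence is arbitrary.

```python
# Minimal sketch: use the fine-tuned checkpoint as a CoNLL-2003-style NER tagger.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Velkey-J/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```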
nnilayy/dreamer-dominance-multi-classification-Kfold-3
nnilayy
2025-06-19T08:43:55Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-06-19T08:43:52Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
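A hedged sketch of how a `PyTorchModelHubMixin` checkpoint like this is normally reloaded. The model class below is a hypothetical stand-in: the real class lives in the authors' code (marked "More Information Needed" above), and `from_pretrained` only works when the class definition and constructor arguments match what was pushed.

```python
# Minimal sketch; "DreamerClassifier" is a hypothetical placeholder for the authors' model class.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class DreamerClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_features: int = 32, num_classes: int = 3):
        super().__init__()
        self.head = nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.head(x)

# Restores the weights saved with push_to_hub, provided the class and its saved
# config kwargs match the original training code.
model = DreamerClassifier.from_pretrained("nnilayy/dreamer-dominance-multi-classification-Kfold-3")
```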
videos-Shah-Sapna-Viral-Video-Original-Lin/FULL.VIDEO.Shah.Sapna.Kumari.Viral.Video.Tutorial.Official
videos-Shah-Sapna-Viral-Video-Original-Lin
2025-06-19T08:43:26Z
0
0
null
[ "region:us" ]
null
2025-06-19T08:42:57Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?videos-Shah-Sapna-Viral-Video-Original" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
zeblok/zeblok
zeblok
2025-06-19T08:42:50Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T13:36:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zerostratos/vi_litr_qwen3-0.6B
zerostratos
2025-06-19T08:42:10Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-19T08:40:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
meetween/Llama-speechlmm-1.0-l-SLU
meetween
2025-06-19T08:41:10Z
40
0
null
[ "safetensors", "speechlmm", "license:other", "region:us" ]
null
2025-04-21T10:28:07Z
--- license: other license_name: license license_link: https://huggingface.co/meetween/Llama-speechlmm-1.0-l/blob/main/LICENSE --- # Model Information <!-- Provide a quick summary of what the model is/does. --> This is the version of meetween/Llama-speechlmm-1.0-l that was fine-tuned for Spoken Language Understanding. # License: see https://huggingface.co/meetween/Llama-speechlmm-1.0-l/blob/main/LICENSE # Model Architecture <!-- Provide a longer summary of what this model is. --> Identical to the base model. This model does not include a video adapter. It was obtained by fine-tuning the speech adapter and a LoRA on the text decoder. This repository contains the LoRA weights merged into the main weights. # How to use Identical to the base model. # Training data <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> The model was fine-tuned on the same datasets used for training the main model. Number of samples (hours): 40 (SLURP) + 25 (SpeechMassive) = 65 in total ## Evaluation results (% Intent Accuracy) <!-- These are the evaluation metrics being used, ideally with a description of why. --> | | SpeechMassive (de) | SpeechMassive (fr) | SLURP (en) | |------------------ |------------ |----------- | ----------- | | Base model | 84.6 | 86.6 | 78.1 | | SpeechLMM_v1.0_L_FT | 81.3 | 82.1 | 74.6 | ## Framework versions - Transformers 4.45.0 - Pytorch 2.3.1+cu124.post2 - Datasets 3.2.0 - Tokenizers 0.20.0 ## Compute Infrastructure: see https://www.cyfronet.pl/en/18377,artykul,plgrid_infrastructure.html
New-Clip-sapna-shah-18-viral-Videos/FULL.VIDEO.sapna.shah.Viral.Video.Tutorial.Official
New-Clip-sapna-shah-18-viral-Videos
2025-06-19T08:41:07Z
0
0
null
[ "region:us" ]
null
2025-06-19T08:40:55Z
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-uk-01.pdf) https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-uk-01.pdf
parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF
parkjw
2025-06-19T08:40:29Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "en", "ko", "base_model:kakaocorp/kanana-1.5-8b-instruct-2505", "base_model:quantized:kakaocorp/kanana-1.5-8b-instruct-2505", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-06-19T08:40:02Z
--- language: - en - ko library_name: transformers license: apache-2.0 pipeline_tag: text-generation model_id: kakaocorp/kanana-1.5-8b-instruct-2505 repo: kakaocorp/kanana-1.5-8b-instruct-2505 developers: Kanana LLM training_regime: bf16 mixed precision tags: - llama-cpp - gguf-my-repo base_model: kakaocorp/kanana-1.5-8b-instruct-2505 --- # parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF This model was converted to GGUF format from [`kakaocorp/kanana-1.5-8b-instruct-2505`](https://huggingface.co/kakaocorp/kanana-1.5-8b-instruct-2505) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/kakaocorp/kanana-1.5-8b-instruct-2505) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF --hf-file kanana-1.5-8b-instruct-2505-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF --hf-file kanana-1.5-8b-instruct-2505-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF --hf-file kanana-1.5-8b-instruct-2505-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF --hf-file kanana-1.5-8b-instruct-2505-q8_0.gguf -c 2048 ```
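Beyond the llama.cpp CLI and server shown above, the same GGUF file can be used from Python. A minimal sketch, assuming the `llama-cpp-python` bindings are installed; the prompt is arbitrary and the context size simply mirrors the server example.

```python
# Minimal sketch (assumption: pip install llama-cpp-python huggingface_hub):
# pull the GGUF file from the Hub and chat with it locally.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="parkjw/kanana-1.5-8b-instruct-2505-Q8_0-GGUF",
    filename="kanana-1.5-8b-instruct-2505-q8_0.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one Korean sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```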
KuuwangE/Nanonets-OCR-s-IQ4_NL-GGUF
KuuwangE
2025-06-19T08:38:23Z
0
0
transformers
[ "transformers", "gguf", "OCR", "pdf2markdown", "llama-cpp", "gguf-my-repo", "image-text-to-text", "en", "base_model:nanonets/Nanonets-OCR-s", "base_model:quantized:nanonets/Nanonets-OCR-s", "endpoints_compatible", "region:us", "imatrix" ]
image-text-to-text
2025-06-19T08:38:10Z
--- language: - en base_model: nanonets/Nanonets-OCR-s pipeline_tag: image-text-to-text tags: - OCR - pdf2markdown - llama-cpp - gguf-my-repo library_name: transformers --- # KuuwangE/Nanonets-OCR-s-IQ4_NL-GGUF This model was converted to GGUF format from [`nanonets/Nanonets-OCR-s`](https://huggingface.co/nanonets/Nanonets-OCR-s) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/nanonets/Nanonets-OCR-s) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo KuuwangE/Nanonets-OCR-s-IQ4_NL-GGUF --hf-file nanonets-ocr-s-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo KuuwangE/Nanonets-OCR-s-IQ4_NL-GGUF --hf-file nanonets-ocr-s-iq4_nl-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo KuuwangE/Nanonets-OCR-s-IQ4_NL-GGUF --hf-file nanonets-ocr-s-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo KuuwangE/Nanonets-OCR-s-IQ4_NL-GGUF --hf-file nanonets-ocr-s-iq4_nl-imat.gguf -c 2048 ```