| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-28 18:27:46 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 534 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-28 18:26:08 |
| card | string | length 11 to 1.01M |
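The records below follow this schema. As a hypothetical loading sketch (the listing does not name its Hub id or file path, so `models.parquet` is a placeholder):

```python
# Hypothetical sketch: "models.parquet" stands in for wherever this split is
# stored; the column names match the schema table above.
import pandas as pd

df = pd.read_parquet("models.parquet")
print(df.dtypes)  # should mirror the schema table

most_downloaded = df.sort_values("downloads", ascending=False).head(10)
print(most_downloaded[["modelId", "author", "downloads", "likes", "pipeline_tag"]])
```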
**modelId:** Khruna/Nico · **author:** Khruna · **last_modified:** 2025-06-19T10:56:18Z · **downloads:** 0 · **likes:** 0 · **library_name:** diffusers · **pipeline_tag:** text-to-image · **createdAt:** 2025-06-19T10:56:04Z
**tags:** ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"]
**card:**
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still002.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# nico
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/Nico/tree/main) them in the Files & versions tab.
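The card only links the weights, so here is a hedged usage sketch assuming the standard diffusers FLUX LoRA flow; trigger words, if any, are unknown, and the prompt is illustrative. The same pattern applies to the author's other FLUX.1-dev LoRA repos below (trtt, Dren).

```python
# Hedged sketch, not the author's verified snippet: load the FLUX.1-dev base
# named in the card and attach this repo's LoRA weights.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Khruna/Nico")

image = pipe(
    "a portrait photo",          # illustrative prompt; no trigger word is documented
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("nico.png")
```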
**modelId:** kubiga354/riyan · **author:** kubiga354 · **last_modified:** 2025-06-19T10:55:58Z · **downloads:** 0 · **likes:** 0 · **library_name:** null · **pipeline_tag:** null · **createdAt:** 2025-06-19T10:55:58Z
**tags:** ["license:cc-by-nd-4.0", "region:us"]
**card:**
---
license: cc-by-nd-4.0
---
**modelId:** JobixAi/tts-grandpa-v2 · **author:** JobixAi · **last_modified:** 2025-06-19T10:53:36Z · **downloads:** 0 · **likes:** 0 · **library_name:** null · **pipeline_tag:** null · **createdAt:** 2025-06-19T10:51:08Z
**tags:** ["region:us"]
**card:**
## Model Details
- Base model: `orpheus-3b-0.1`
- Data: a mix of real recordings of grandpa, synthetic samples generated by a previously fine-tuned Orpheus model, and synthetic speech in existing voices
- LoRA rank 64, trained for 1,000 steps (a hedged configuration sketch follows)
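A minimal sketch of the stated rank-64 setup using `peft`; everything except the rank and step count is an assumption, not a value from the card.

```python
# Hedged sketch of the card's rank-64 LoRA setup; alpha, dropout, and target
# modules are assumptions.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                                                     # from the card
    lora_alpha=64,                                            # assumption
    lora_dropout=0.05,                                        # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
# get_peft_model(base_model, lora_config) would then wrap orpheus-3b-0.1
# before training for the card's 1,000 steps.
```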
**modelId:** leodotnet/Qwen3-4B_pubgmbot_query-v23 · **author:** leodotnet · **last_modified:** 2025-06-19T10:48:06Z · **downloads:** 0 · **likes:** 0 · **library_name:** transformers · **pipeline_tag:** text-generation · **createdAt:** 2025-06-19T10:45:16Z
**tags:** ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
**card:**
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
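Pending the authors' snippet, a minimal sketch assuming the standard 🤗 Transformers text-generation pipeline; the prompt is illustrative.

```python
# Minimal sketch, not the authors' verified snippet: standard text-generation
# pipeline over this checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="leodotnet/Qwen3-4B_pubgmbot_query-v23",
    torch_dtype="auto",
    device_map="auto",
)
messages = [{"role": "user", "content": "Suggest a landing spot in PUBG Mobile."}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # assistant reply
```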
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**modelId:** Videos-jobz-hunting-sajal-malik-17k/18.wATCH.jobz.hunting.sajal.malik.viral.video.original.free · **author:** Videos-jobz-hunting-sajal-malik-17k · **last_modified:** 2025-06-19T10:37:59Z · **downloads:** 0 · **likes:** 0 · **library_name:** null · **pipeline_tag:** null · **createdAt:** 2025-06-19T10:37:46Z
**tags:** ["region:us"]
**card:**
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 CLICK HERE 🟢==►► WATCH NOW Leaked Video Viral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
**modelId:** New-tutorial-guru-salsa-18-Viral-Videos/FULL.VIDEO.guru.salsa.Viral.Video.Tutorial.Official · **author:** New-tutorial-guru-salsa-18-Viral-Videos · **last_modified:** 2025-06-19T10:37:56Z · **downloads:** 0 · **likes:** 0 · **library_name:** null · **pipeline_tag:** null · **createdAt:** 2025-06-19T10:36:39Z
**tags:** ["region:us"]
**card:**
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 CLICK HERE 🟢==►► WATCH NOW Leaked Video Viral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
**modelId:** Patipon/obi-sapbert-singlelabel · **author:** Patipon · **last_modified:** 2025-06-19T10:37:01Z · **downloads:** 0 · **likes:** 0 · **library_name:** transformers · **pipeline_tag:** text-classification · **createdAt:** 2025-06-19T10:36:37Z
**tags:** ["transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"]
**card:**
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
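Pending the authors' snippet, a minimal sketch assuming the standard sequence-classification API; the label mapping lives in the repo's config, and the input text is illustrative.

```python
# Minimal sketch, not the authors' verified snippet: single-label text
# classification with this checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "Patipon/obi-sapbert-singlelabel"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("aspirin 81 mg oral tablet", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```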
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**modelId:** BatiRocky/dummy-model · **author:** BatiRocky · **last_modified:** 2025-06-19T10:33:08Z · **downloads:** 0 · **likes:** 0 · **library_name:** transformers · **pipeline_tag:** fill-mask · **createdAt:** 2025-06-19T10:23:10Z
**tags:** ["transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"]
**card:**
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
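Pending the authors' snippet, a minimal sketch assuming the standard fill-mask pipeline; CamemBERT checkpoints use `<mask>` as the mask token, and the French example sentence is illustrative.

```python
# Minimal sketch, not the authors' verified snippet: fill-mask inference with
# this CamemBERT-based checkpoint.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="BatiRocky/dummy-model")
for pred in unmasker("Le camembert est <mask> !")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```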
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**modelId:** Khieu-dam-o-Viet-Nam-in-Vietnamese/NOI.TIENG.Khieu.dam.o.Viet.Nam.in.Vietnamese · **author:** Khieu-dam-o-Viet-Nam-in-Vietnamese · **last_modified:** 2025-06-19T10:32:59Z · **downloads:** 0 · **likes:** 0 · **library_name:** null · **pipeline_tag:** null · **createdAt:** 2025-06-19T10:32:49Z
**tags:** ["region:us"]
**card:**
1 second ago
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
[🌐 CLICK HERE 🟢==►► WATCH NOW FREE](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
<a href="https://sahabagi-mgi.blogspot.com/p/heres-now.html" rel="nofollow"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos"></a>
**modelId:** New-tutorial-prajakta-mali-18-Viral-Videos/FULL.VIDEO.prajakta.mali.Viral.Video.Tutorial.Official · **author:** New-tutorial-prajakta-mali-18-Viral-Videos · **last_modified:** 2025-06-19T10:28:55Z · **downloads:** 0 · **likes:** 0 · **library_name:** null · **pipeline_tag:** null · **createdAt:** 2025-06-19T10:28:43Z
**tags:** ["region:us"]
**card:**
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 CLICK HERE 🟢==►► WATCH NOW Leaked Video Viral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
**modelId:** New-Clip-Indian-MMS-18-viral-Videos/FULL.VIDEO.mms.Viral.Video.Tutorial.Official · **author:** New-Clip-Indian-MMS-18-viral-Videos · **last_modified:** 2025-06-19T10:28:51Z · **downloads:** 0 · **likes:** 0 · **library_name:** null · **pipeline_tag:** null · **createdAt:** 2025-06-19T10:26:35Z
**tags:** ["region:us"]
**card:**
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 CLICK HERE 🟢==►► WATCH NOW Leaked Video Viral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
**modelId:** godnpeter/qwen25_3B_answeronly · **author:** godnpeter · **last_modified:** 2025-06-19T10:23:24Z · **downloads:** 0 · **likes:** 0 · **library_name:** transformers · **pipeline_tag:** text-generation · **createdAt:** 2025-06-19T10:21:29Z
**tags:** ["transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
**card:**
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
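Pending the authors' snippet, a minimal sketch assuming the standard chat-template flow for Qwen2-style checkpoints; the question is illustrative.

```python
# Minimal sketch, not the authors' verified snippet: conversational generation
# via the tokenizer's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "godnpeter/qwen25_3B_answeronly"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```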
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**modelId:** BoghdadyJR/Qwen_MERGED_final · **author:** BoghdadyJR · **last_modified:** 2025-06-19T10:17:16Z · **downloads:** 0 · **likes:** 0 · **library_name:** transformers · **pipeline_tag:** image-text-to-text · **createdAt:** 2025-05-16T20:20:44Z
**tags:** ["transformers", "safetensors", "qwen2_vl", "image-text-to-text", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "endpoints_compatible", "region:us"]
**card:**
---
base_model: unsloth/qwen2-vl-2b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BoghdadyJR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-vl-2b-instruct-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
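The card does not ship a usage snippet, so here is a hedged loading sketch assuming the standard transformers Qwen2-VL classes; the image URL and question are illustrative.

```python
# Hedged sketch, not the author's verified snippet: image-text-to-text
# inference with this merged Qwen2-VL checkpoint.
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

name = "BoghdadyJR/Qwen_MERGED_final"
processor = AutoProcessor.from_pretrained(name)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    name, torch_dtype="auto", device_map="auto"
)

image = Image.open(requests.get("https://example.com/photo.jpg", stream=True).raw)
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(
    out[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)[0])
```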
**modelId:** bobby97/step3_3e09915b-5eab-4da6-89cc-1473ba7dfd3b · **author:** bobby97 · **last_modified:** 2025-06-19T10:16:00Z · **downloads:** 0 · **likes:** 0 · **library_name:** diffusers · **pipeline_tag:** text-to-image · **createdAt:** 2025-06-19T08:53:55Z
**tags:** ["diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-Fill-dev", "base_model:adapter:black-forest-labs/FLUX.1-Fill-dev", "license:other", "region:us"]
**card:**
---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: A black and white image captures faint trails of meteors streaking
across the night sky, surrounded by a few discernible stars. The motion of the meteors
creates long, luminous lines against the dark backdrop, highlighting their rapid
movement through space.
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
An inpainting model based on Flux Fill.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
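In place of the TODO above, a minimal sketch assuming standard diffusers APIs: load the base `FLUX.1-Fill-dev` pipeline and attach this repository's LoRA weights. The input/output file names are illustrative, and the prompt is abbreviated from the card's `instance_prompt`.

```python
# Hedged sketch, not the authors' verified snippet: Flux Fill inpainting with
# this repo's LoRA attached.
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("bobby97/step3_3e09915b-5eab-4da6-89cc-1473ba7dfd3b")

image = load_image("input.png")  # image to inpaint (illustrative path)
mask = load_image("mask.png")    # white = region to fill (illustrative path)

result = pipe(
    prompt="A black and white image of faint meteor trails streaking across the night sky",
    image=image,
    mask_image=mask,
    guidance_scale=30.0,
    num_inference_steps=50,
).images[0]
result.save("inpainted.png")
```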
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
**modelId:** tungbt/test-create-model · **author:** tungbt · **last_modified:** 2025-06-19T10:14:27Z · **downloads:** 0 · **likes:** 0 · **library_name:** null · **pipeline_tag:** null · **createdAt:** 2025-06-19T10:14:23Z
**tags:** ["license:apache-2.0", "region:us"]
**card:**
---
license: apache-2.0
---
**modelId:** Wiefdw/tax-raft-mistral-7b · **author:** Wiefdw · **last_modified:** 2025-06-19T10:11:39Z · **downloads:** 0 · **likes:** 0 · **library_name:** null · **pipeline_tag:** null · **createdAt:** 2025-06-19T07:20:00Z
**tags:** ["safetensors", "region:us"]
**card:**
# Tax Document RAFT Model with RAG Support
This model was trained with the **RAFT (RAG Fine-Tuning)** technique, using tax documents as the knowledge base.
It is paired with a **ChromaDB vector database** for tax-document retrieval.
## Additional Information
- **Base model**: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
- **Fine-tuning technique**: RAFT (RAG Fine-Tuning)
- **Embedding model**: all-MiniLM-L6-v2
- **Vector DB**: ChromaDB
- **Framework**: Unsloth + Transformers + LangChain (a retrieval sketch follows)
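A minimal retrieval sketch, assuming ChromaDB's default embedding function (all-MiniLM-L6-v2, matching the card); the collection name, sample document, and prompt template are illustrative, not the repo's shipped code.

```python
# Hedged sketch of the RAG side: retrieve tax-document context from ChromaDB,
# then build a RAFT-style prompt for the fine-tuned model.
import chromadb

client = chromadb.PersistentClient(path="./tax_chroma")  # illustrative path
collection = client.get_or_create_collection("tax_docs")
collection.add(
    ids=["doc-1"],
    documents=["PPh 21 is the Indonesian income tax withheld on employee salaries."],
)

question = "What is PPh 21?"
hits = collection.query(query_texts=[question], n_results=3)
context = "\n".join(hits["documents"][0])

# The fine-tuned model is expected to answer from the retrieved context.
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```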
**modelId:** Khruna/trtt · **author:** Khruna · **last_modified:** 2025-06-19T10:10:38Z · **downloads:** 0 · **likes:** 0 · **library_name:** diffusers · **pipeline_tag:** text-to-image · **createdAt:** 2025-06-19T10:09:29Z
**tags:** ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"]
**card:**
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_02_11.Still001.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# trtt
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/trtt/tree/main) them in the Files & versions tab.
**modelId:** Khruna/Dren · **author:** Khruna · **last_modified:** 2025-06-19T10:07:50Z · **downloads:** 0 · **likes:** 0 · **library_name:** diffusers · **pipeline_tag:** text-to-image · **createdAt:** 2025-06-19T10:06:41Z
**tags:** ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"]
**card:**
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Dren
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/Dren/tree/main) them in the Files & versions tab.
**modelId:** adugeen/authorship-e5-base · **author:** adugeen · **last_modified:** 2025-06-19T10:07:36Z · **downloads:** 0 · **likes:** 0 · **library_name:** sentence-transformers · **pipeline_tag:** sentence-similarity · **createdAt:** 2025-06-19T09:48:30Z
**tags:** ["sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:276686", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:intfloat/multilingual-e5-base", "base_model:finetune:intfloat/multilingual-e5-base", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"]
**card:**
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:276686
- loss:MultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-base
widget:
- source_sentence: 'query: Печенеги отступали. Они могли запросто убить оставшегося
позади князя Владимира, мальчишку и старика, но получили приказ - уходить. Куря
- печенежский хан, проигравший бой с князем, был смертельно ранен.
Воины забрали его и вернулись на прежнее место стоянки, где были оставлены обозы
с провиантом и разбит временный лагерь. Гийяр не отходил от отца ни на шаг, надеясь
не то на чудо, не то на лекарей. Ведь кроме него у мальчишки никого не было. Мать
он потерял ещё совсем крохой.
Куря был властным и жестоким правителем. К сыну относился хорошо, но никогда не
баловал. Для Гийяра отец всегда был идеалом, многие завидовали его решимости и
хитроумию. "Власть держится на страхе" - часто говорил он. Но теперь он находился
между жизнью и смертью. И чёрная чаша весов была намного тяжелее...
Великий хан умер ночью, не приходя в сознание.
Парень понимал, что вынужден стать следующим ханом, ведь по линии отца у него
больше родственников не было. Но это значило одно. Снова совершать набеги на Киев
и Новгород, продолжать дело Кури, а значит - обернуться против своих друзей, поступить
подло по отношению к князю, Алекше и Оленьке. А есть ли у него выбор?
Да. И сейчас это его выбор. Отец не подскажет ему, не поможет, не укажет верный
путь. Да и не послушал бы Гийяр его. Он никогда не сможет стать таким жестоким,
как отец. Но теперь мальчик понимал, откуда тот черпал свою ненависть, злость,
обиду и жажду крови. У него никогда не было настоящего друга, который смог бы
подставить плечо, дать дружеский совет. Несмотря на орды подчинённых ему печенегов
и хазар, отец был одинок. Не ставил никого рядом с собой, унижал, жестоко наказывал
воинов, заслужив тем самым "страх и уважение". Так считал он сам. И такой позиции
придерживались печенеги...
А Гийяр? Алекша спас ему жизнь, не зная, что тот враг. А когда узнал, разве захотел
убить его? Ведь печенеги сожгли его деревню и убили дедушку.
Но Гийяр не мог понять, как вообще такое возможно. То ли христианину так и полагается,
молча переносить все невзгоды, что посылает ему судьба, то ли русский дух настолько
силён, что не может опускаться до мести. Не понимал..., а жажда мести разгоралась
в его душе. Оленька... Храбрая девчушка с чёлкой и длинной светлой косой... Нет!
О ней даже не думай!
Он должен наказать убийцу отца, даже если тот сам, обманутый Кривжей, ввязался
в бой. И тут же сердце возражало. "Так нельзя".
Ты ведь уже принял решение. Там, на поле боя, когда отец проиграл. Ты остановил
разъяренных печенегов, готовых разорвать ненавистного князя Владимира. А чудной
дед и мальчишка? Помешали бы они этому? Нет.
Разве парень мог сказать такие слова?
Перед глазами снова появляется та картина.
Молодой князь ударил Курю мечом и выбил из седла. Мальчик бросается к отцу. Орда
печенегов уже не сдерживает коней, рвущихся к бою. Ещё секунда, и они сорвутся
с места. Поляна обагрится кровью.
"Нет! -останавливает их Гийяр. - Это был честный поединок. Уходим!"
Воины нехотя, но послушно разворачивают коней.
Почему они послушали его? Он ведь не хан. Ответ один. Они доверяют ему. Как и
его отцу.
Так что же делать?
Глупый мальчишка. Ты всё решил. Ты возглавишь печенежский союз племён, но не посмеешь
напасть на Киевскую Русь.
Гийяр вышел из шатра. Кругом, куда не посмотри выстроилось войско. В ожидании
молодого хана они перешёптывались, спорили, ругались. Но тут все разом замолчали.
Они были готовы услышать решение Гийяра.
- Слушайте! Печенеги- свободные воины! -голос мальчика не дрогнул.- Вы можете
следовать за мной, а можете примкнуть к кангарам! Я же считаю, что мы должны разбить
половцев, которые давно зарятся на наши земли! Кто со мной?!
Ответом было одобрительное и дружное "Ура!"
Печенеги ушли с русской земли, но не навсегда. Через какое-то время они вернулись
с новым ханом, чтобы ввязаться в междоусобную войну между Ярославом Мудрым и Святополком
Окаянным, на стороне последнего.'
sentences:
- 'query: http://vk.com/audio?performer=1&q=Hollywood%20Undead%20Coming%20Back%20Down
Пустые улицы большого города, что освещались утренним солнцем. На переулках встречались
редкие прохожие, которые с удивлением смотрели на полураздетого парня, шагающего
вдоль улиц Магнолии.
"Сумасшедший,"- подумают многие, но разве это неправда? Нет, это, правда, это
чистейшая, правда, которая с каждым днём гложет его душу. Он был сумасшедшим до
того как она пришла в его жизнь, и остался таким после того как она ушла...
Каждый день, прогуливаясь по улицам утреннего города, он с улыбкой на губах, и
с грустью в глазах вспоминал все моменты, когда она ходила вместе с ним. Каждый
раз, ему чудилось, что она с ним, что она не оставила его, но это ведь неправда,
или правда? Что за чушь? Конечно, же, она уже не с ним, она там далеко наверху
от них...
Он всегда слушал её звонкий, но одновременно робкий голос, от которого так и хотелось
прижать к себе и не отпускать никогда, но он отпустил, туда, от, куда не возвращаются....
И теперь, её голос постепенно растворяется из его мыслей.
С ней, он чувствовал, что он не такой как все, с ней, он мог быть особенным, но
зачем теперь быть таким, если её нет? Теперь, он лишь одно лицо, в той толпе,
что окружила его. Возможно, его и заметят, но точно не как личность, его знала
лишь она, но она ушла...
Someday, someday
Но когда-нибудь, когда-нибудь
I know you''re coming back
Я знаю, ты вернешься...
Строчки сами всплыли у него в голове. Песня, которую он услышал недавно по радио,
всегда преследовала его. Не важно где, на улице, в метро, в машине, или дома,
он всегда вспоминал её... Ту, что изменила его жизнь, и ту, которая сейчас очень
далеко от него...
Шагая по улицам просыпавшегося города, он невольно прошёл-то место, где она покинула
этот мир. Воспоминания нахлынули его, но он попытался их отогнать, но разве это
возможно? Нет... Он это знал, но всё же противится судьбе. Какая ирония, не правда
ли?
Грязная улица, в старом районе Магнолии, насквозь была пропитана кровью, куревом,
и алкоголем. Она ведь предлагала уйти, а он не послушался её. Глупец. Маленький
наивный глупец, который ничего не боится, хотя всё совсем наоборот...
Маньяк, от которого они бежали, был пропитан безумством, но разве безумством называют
алкоголь? Кто знает, кто знает... А жертвы, пьяного человека, прибежали в какую-то
улочку, из которой убежать невозможно, или тупик... Одно всего лишь слово, а сколько
паники оно приносит... Пятясь, они встали, во что-то похожее не грязь, или это
и была грязь? Кто знает, кто знает... Замахнувшись, маньяк направил перечный ножик
на парня, и кинул его. Банально да? Но девушка, успела встать перед ним, и по
этому, удар пришёлся прямо в живот.
Что было дальше, парень помнил смутно, но было очевидно, на его глазах, за него
умерла его Джувия. Странно, да? Вроде бы должно быть наоборот, а получилось так.
На его руках, была её кровь, а он видимо потерял сознание...
Теперь, мир слишком серый, хотя нет, он кроваво-красный, такой же, как и её любимый
цвет. Странно да? Любит дождь, но любимый цвет красный... Но разве имеет ли это
значение, если Джувия ушла со сцены, навсегда? Возможно, это и был её час, а может
быть, она жила бы ещё дольше, но всё же прошла мимо, так же, как и всё хорошее,
в этом алчном мире...
И этот факт, заставил его повзрослеть, и принять реальность со всеми её составляющими...
Трудно, было лишь по началу, а сейчас он привык, и даже вполне справляется, но
что-то поменяется, и это, что-то произойдёт слишком быстро...
Она никогда не задумывалась, о том, будут ли её помнить, или не заплачут ли они,
когда она умрет, нет, у неё даже мыслей таких не было, ведь ты была в своих мечтах.
Думала, что будешь жить вечно, как бог, или дьявол. Она думала, что тебя не пробьёт
никакая пуля, но всё же ошиблась... Простой ножик отнял твою жизнь, и знаешь,
я не жалею об этом, ведь, ты всегда была с крыльями, как у ангела, хоть ты не
всегда была счастлива, пока, в твоей жизни не запели посланники Божьи...
И снова, его пробирает дрожь, он думает, что сейчас она подбежит, сзади, и, закрыв
глаза, скажет "Угадай кто?" И ты как обычно просто скажешь с улыбкой "Джувия.
Моя Джуви." И темноволосый поклянётся, всем богам, что она смущается от его слов.
Ведь это правда...
Будние дни потихоньку вытесняют её голос из его головы, а сам он погружается,
в суету города. Возможно, он забудет её, найдёт другую, и они заживут счастливо,
или будет одинок всё жизнь, вспоминая её. Кто знает, чем закончится эта история,
но однажды они встретятся. Через сто лет, через века, тысячелетия, не важно...
Они встретятся, и это главное....
А пока, темноволосый скажет лишь одно предложение, и с ухмылкой пойдёт в сторону
дома...
- Кода же ты спустишься, Джуби?
Всего лишь за несколько секунд, мир может поменяться, до неузнаваемости. Берегите
это время, и возможно, в будущем время само преподнесет вам подарок...'
- 'query: Больно... Слишком больно... Больно каждое мгновение, каждый вздох причиняет
просто немыслимую боль. А она лежит совсем рядом, бездыханная. Вокруг люди, но
он их просто не видит. Он видел как она умерла, она смотрела прямо на него в этот
момент и хотела поцеловать.
"Magic can do much, but not death".
Он отгородился от всего вокруг, осталась только боль. И вдруг вспышка ярости,
а перед его взором гаснущие глаза Белль. Ярость заставила его подняться с колен
и направиться к убийце. При ходьбе, в ноге проснулся вулкан, но это было уже не
важно. Боль в душе заглушала все остальное.
"Она умерла из-за меня".
Он подошел к убийце, тот лежал на земле и ухмылялся. Он выполнил свою месть и
был собой доволен. Мужчина с яростью начал душить убийцу своей тростью, но ухмылка
не сходила с его лица. Голда пытались оттащить, но ярость придала ему огромную
силу, и вскоре глаза пирата погасли. Но Голду не стало от этого легче, он лишь
почувствовал, что предал ее веру. Убив Белль, пират убил все человеческое в Голде,
остался только монстр. Монстр с разбитым сердцем.
Эмма хотела надеть на него наручники, но не смогла. Импульсом он отбросил ее и
ее родителей от себя. Его кожа начала меняться, и одежда также вмиг переменилась.
Совершив убийство, он снова стал крокодилом, безжалостным Румпельштильцхеном.
Он снова подошел к Белль и упал перед ней на колени. Он смотрел на ее холодное
лицо и просто не мог поверить, что ее теплые глаза не посмотрят с нежностью на
свое чудовище. Из груди вырвался жуткий нечеловеческий крик, будто он звал душу
любимой, но она не откликалась.
И снова ярость, безумная ярость и слезы. Она виновата лишь в том, что влюбилась
в чудовище, подарила ему счастье. Но ее смерть забрала все.
Было совсем не так, как тогда в замке. Тогда он хотя бы знал, что его принцесса
в порядке, теперь...
Слишком больно...
Румпельштильцхену не хотелось ступить за черту города и все забыть. Он не мог
забыть свою принцессу, хоть воспоминания и приносили страдания. Он хотел просто
умереть рядом с той девушкой, которая забрала его сердце.'
- 'query: До этого момента я ничего не чувствовал. Пустота и темнота. После того,
как эта тварь вцепилась мне в шею я успел мысленно со всеми попрощаться и был
готов к смерти. Но это была не она. На том свете не может быть так неопределенно
и темно. Значит, я все ещё жив. А раз так, я скоро выберусь отсюда...
По телу словно прошел электрический разряд. Я вздрогнул и распахнул глаза.
Честно признаться, я даже не сразу понял где нахожусь. И в данный момент меня
это совсем не тревожило, потому, что перед собой я увидел знакомое лицо мага.
Единственный, кто мне нравился после Джейса и с первого взгляда сумел заполучить
моё сердце. Да, это несомненно был он. За столькие годы впервые в стенах института
кто-то кроме сумеречных охотников.
- Магнус Бейн, - с максимальной холодностью, какая только возможна в данной ситуации,
прокомментировал очевидное я.
Так уж принято у нас, охотников. Не кинусь же я к нему на шею с поцелуями и фразочками
примитивных типа: "О, привет! Давненько не виделись. Я так скучал".
Чисто физически в данный момент я и не мог себе такого позволить. Да и видел я
мага всего во второй раз в своей жизни. Но, черт возьми, этого было достаточно,
чтобы подцепить меня на крючок как глупого карася!
- Александр Лайтвуд, - снисходительно улыбнулся Магнус.
Похоже после долгой отключки у меня начала кружиться голова, но это показалось
даже приятно.
Я открыл было рот чтобы задать вопрос, но гость опередил меня.
- Только не спрашивай, что я здесь делаю, - он прожег меня взглядом медовых глаз.
- Ходж переживал за твою жизнь и, как я вижу, не напрасно.
Это было именно то, что я и хотел спросить. Если он умеет читать мысли - это весьма
плачевно. Предпочитаю держать свои мысли при себе.
Бейна снова что-то развеселило и он криво улыбнулся отворачиваясь от меня. Это
подтвердило мои догадки.
- Ладно, не буду тебя смущать, - маг поднялся. - Ты еще слишком слаб и отдых лишним
не будет.
Магнус сделал странное движение похожее на полупоклон и направился к двери.
"Нет, так просто ты от меня не уйдешь".
Мне действительно хотелось, чтобы он побыл рядом, чтобы не уходил так быстро.
Он мне нравился, насколько можно судить по одной мимолетной встрече глазами. Хотя
я не готов был признаться в этом даже себе.
- Я нормально себя чувствую, - не своим голосом заметил я, провожая мага взглядом.
Тот не остановился. - Бейн! - снова никакой реакции. - Стой! - в яростном бессилии
я ударил рукой по кровати и мгновенно вернувшаяся боль заставила меня стиснув
зубы едва слышно простонать.
Маг замер.
- Хочешь, чтобы я остался? - в его словах я услышал весьма недвусмысленный намек.
- Я нужен тебе?
Фигура в черной мантии развернулась ко мне. Бейн вскинул бровь ожидая ответа.
- Ой, иди к дьяволу! - психанул я, неожиданно даже для самого себя, но пути назад
уже не было.
- ...
- Убирайся, - процедил я сквозь зубы, в душе умоляя его не слушать моих слов.
- Ладно, - примирительно тряхнул головой Магнус. - Прости.
Он вернулся к моей койке.
Я смерил его сердитым взглядом и "совершенно случайно" встретился с прекрасными
глазами. Его взор проникал прямо в душу, заставлял подчиняться воле хозяина.
Другого описания я бы дать не смог, но может только на меня он так действовал.
- У тебя тоже красивые глаза, - сделал комплимент он.
- Хватит! - я прикусил губу, чувствуя, как краска приливает к моему лицу.
- Да, извини, - опомнился Магнус. - Я понял, что тебе это не нравится. Что ж,
придется играть по твоим правилам. - Бейн пожал плечами и весьма глупо улыбнулся.
Мне осталось только покачать головой.
- Там, на вечеринке вампиров... - я замялся, - Как ты узнал, что я... гей?
- Глупый вопрос, не находишь? - сладко улыбнулся он, в то время как его обволакивающий
голос в очередной раз взял меня в свой плен.
- Да, пожалуй. - поспешно ответил я, и добавил как-бы между прочим, - Как ты меня
тогда назвал?
- Жгучей бестией, - маг будто ждал этого вопроса.
Я скорчил скептическую физиономию, а вот Бейн выпрямился и посерьезнел.
И неожиданно положил руку на мою ладонь, заставив меня невольно напрячься.
- Знаешь ли, - откровенно признался он, - я не занимаюсь благотворительностью
и не спасаю от смерти сумеречных охотников...
На секунду он замолчал, видимо размышляя, стоит ли продолжать признание или лучше
забить пока не поздно.
Я чуть наклонил голову в знак крайней заинтересованности и маг, вздохнув, продолжил.
- Мне было наплевать на просьбу Ходжа и поначалу я отказал ему... Но... после
того как он назвал мне имя... Будь на твоем месте кто-то другой, меня бы здесь
не было.
Магнус пронзил меня взглядом. И тут я решился.
- Мог бы сформулировать проще, - заметил я.
- В смысле?
- Ты мне тоже небезразличен.
Плечи мага затряслись от смеха.
- Что на этот раз не так? - возмутился я освобождая руку.
- Ты тоже не очень красноречив. Но не беда. Я не беседовать сюда пришел.
Бейн провел ладонью по моей щеке, отчего меня бросило в жар, потом наклонился
и осторожно коснулся моих губ.
Я подался было вперед требуя большего, но тот неожиданно отстранился.
- Что опять? - недовольно закатил глаза я.
- Мне нравится тебя злить, - признался Магнус скривив столь желанные губы в усмешке.
- Ты такой сексуальный, когда заводишься.
- Ты просто пользуешься моей дееспособностью. Если бы я мог подняться...
Маг не дал мне закончить угрозу и с новой страстью накрыл мои губы своими. На
этот раз я превозмогая боль обхватил руками его шею и сильнее притянул его к себе.
Тот слегка улыбнулся не прерывая поцелуя. Моя голова продолжала кружиться.
Наконец, тяжело дыша, Бейн отстранился.
- Пока ты в таком состоянии, на большее не рассчитывай, - с улыбкой пригрозил
он.
- Можно подумать, ты делаешь мне одолжение. - принял вызов я. - Тебе это нужно
не меньше.
- Ты наглеешь прямо на глазах, - вынес вердикт маг.
- Ты ещё плохо меня знаешь.
- Тогда до скорой встречи. Признаюсь, буду скучать, - Магнус Бейн уже во второй
раз за этот вечер направился к двери. И на этот раз я его не останавливал.
- Да, я тоже, - выдохнул я жалея, что не вижу сейчас его лица.
Дверь за Бейном закрылась. Я остался один и через пару минут вновь отключился.
С того дня минули почти два месяца. Я старался не думать о возможностях встречи
с магом. Чтобы первым делать шаг навстречу я был слишком горд... Или стеснителен,
но все-же многое в моей жизни изменилось после нашей встречи. В первую очередь,
я стал по-другому смотреть на Джейса. Он не привлекал меня так как раньше, хотя
несомненно все же нравился. Однако фантазии мои занимал исключительно этот высокомерный
и эпатажный тип - Магнус Бейн. Даже если я пытался мысленно послать его к черту
и выкинуть из головы, со временем все больше убеждался, что это невозможно.
- Алек, ты здесь? - в дверях появился Джейс.
- Что случилось? - я не оглянулся на него: не хотел видеть.
"Правда?"
- Ходж сказал - вечером отправляемся на охоту. Ты с нами?
- Естественно, что за вопросы?
- Просто ты выглядишь усталым, - с сочувствием заметил парень.
"Будто тебя это волнует", - с горечью подумал я.
- Тебе кажется.
Повисло напряженное молчание. Я хотел, чтобы он ушел. Впервые в жизни я не хотел
с ним разговаривать. С человеком, которого ещё недавно мечтал затащить в постель.
Боже!
- Слушай, - он вошёл в комнату и прикрыл за собой дверь, - мы же друзья... мы
братья. Если у тебя какие-то проблемы, ты не должен держать их в себе. Поговори
с нами. Со мной. Я уверен, что вместе мы легко справимся с твоей... депрессией.
Джейс положил руку на мое плечо. Раньше он часто так делал и мне это нравилось,
но сейчас его вмешательство в мою личную жизнь лишь нагнетало ситуацию. В последнее
время я всеми силами пытался унять уже много лет мучившие меня чувства к приёмному
брату. Бейн должен быть уверен в том, что кроме него мне никто не нужен.
- Нет у меня никаких проблем, я просто хочу побыть один. Ты можешь оставить меня
в покое? - как можно мягче постарался сказать я.
"Вообще-то он хочет помочь. Не его вина, что я гей".
- Я понимаю твои чувства... - осторожно начал он.
- Нет, - отрезал я, вдруг оглянувшись на него.
Тот даже отступил назад.
- Ты не понимаешь меня, а я не понимаю вас с Клэри. Это вполне логично, это совершенно
разные чувства и никто не винит тебя в моих переживаниях, но я разберусь с ними
сам. Я знаю чего хочу.
- Хорошо, - Джейс опустил глаза.
Никогда за ним такого не наблюдал.
- Твое дело, - пожал плечами он и, выйдя за дверь, вновь оставил меня в одиночестве.
Не знаю, сколько времени я простоял так размышляя, правильно ли поступил и не
обидел ли чувства брата.
Дверь в комнату открылась. Я решил, что Джейс вернулся, бросил на него недовольный
взгляд и... замер в изумлении.
На пороге стоял Магнус Бейн, одетый, как и всегда, в лучших своих традициях: в
серых облегающих брюках с заниженной талией и усыпанным блестками широким ремнем,
белой майке и кожаной куртке с бессчетным количеством цепей и заклепок. Броский
макияж составлял темную подводку глаз сверху и золотисто-перламутровую, потолще,
снизу.
Я понимал, что нужно как-то поприветствовать гостя, но, будто проглотив язык,
продолжал безмолвно пялиться на него.
- Пришел проверить, как ты себя чувствуешь, - без лишних формальностей сообщил
Бейн.
"Мое сердце упало на пол и откатилось к его ногам, оставляя кровавую полосу".
- Если бы ты хотел проверить как я себя чувствую - пришёл бы раньше. - с нотками
обиды ответил я. - Тебе просто заняться нечем? Решил прогуляться до института?
- А мог бы вообще не придти, - заметил маг.
Такая перспектива меня совершенно не вдохновила.
- У меня тоже есть чувства, Алек. Я мог колебаться, сомневаться, правильно ли
поступаю и хочу ли этого на самом деле. - Магнус подошел ближе и остановился напротив
меня.
Я стоял опустив голову и потупив взгляд. Кто я такой, чтобы обвинять его в чем-то?
Эгоист. Всё это время, страдая и мечтая о встрече, я ни на секунду не задумался
о том, что чувствует он. С каким трудом ему дается признание самому себе. У него
тоже есть гордость, на которую ему пришлось наступить, чтобы явиться сюда. Ко
мне. Снова.
- Ты бы сам ко мне ни за что не пришел, - снова в точку.
"Да".
- Прости, - шепчу едва различимо.
Маг изучает меня кошач'
- source_sentence: 'query: А если бы мы встречали Новый Год вместе, м?
Знаешь, я бы не хотела, что бы мы проводили его в гостях у кого-то из нас, нет.
Это была бы беготня, еще и суета, готовка и прочее, а это немного не то, чего
бы я хотела, это не то самое, не идеальное-неидеальное-простое-непростое. Может,
если бы нас какими-то силами свыше пустили к друзьям, то мы бы посидели, пообщались
бы. Это было бы классно, знаешь? Просто сидеть и говорить то, что хочешь, а потом
мы пошли бы в комнату, взяв что-то из выпивки и вкусностей. То, что выбрала бы
ты, ведь знаешь - я не умею пить. Бейлиз, может? И я либо не выпью больше глотка,
либо выпью очень много. Я люблю шоколад с не очень давних пор. Мы сели бы на кровать,
а ты сняла бы свои каблуки, высокие и очень крутые, жалуясь мне на усталость и
боль от них. Я пожалела бы, но все-таки не смога бы удержаться и не позлорадствовать
о том, что на мне балетки. Может, даже просто носки, раз уж у друзей. Ты положила
бы ноги на кровать, возможно, согнула бы их в коленях, и это выглядело бы очень
мило в роскошном платье. Нам было бы очень жарко, но, знаешь, не из-за того, что
мы рядом или что-то в этом роде, нет, а просто из-за того, что сумбурный и иногда
многолюдный Новый Год всегда так поступает. А ведь, казалось бы, зимний праздник.
Я бы хотела держать тебя за руку, ведь я люблю твои малюсенькие ручки, как у ребенка,
разве что со взрослым маникюром и взрослыми кольцами, которые так мешают сжимать
крепко руку. И ты бы разрешила, ведь так? Хах, возможно ты бы даже колола меня
своими большими ногтями, мы бы начали щекотать друг друга, в этом я уверена, а
ведь и так жарко, эх. Волосы растрепанные, мокрые от пота, фу. Но классно.
Мы болтали бы всю ночь, мы же любим это дело, да? Может, мы вспоминали бы что-то
из наших старых приколов в интернете, вспоминали бы наше общение до того, как
между нами началось зарождаться что-то большее, чем "Привет, кло*, как дела? Норм,
а ты?". Мы бы, думаю, посидели в гаджетах, посмотрели бы фотографии. Если бы у
меня попадалось что-то такое, чего не хотелось бы показывать тебе - неудачное
фото, к примеру - я начала бы прятать телефон, а ты пыталась бы отнять, чтобы
все-таки посмотреть. И наоборот. А под утро мы уснули бы.
И, знаешь, мы когда-то называли себя парой. Мы никогда не были парой, хах? Чертовски,
чертовски глупо. Но кто мы тогда? А никто не знает, ведь, как сказала как-то раз
ты, моя любимая ванилька и любительница статусов, "Определить - значит, ограничить".
Это, вроде, Оскар Уайльд, мне он нравится.
*производное от "клон", аналог "бро". Смысл в том, что было много общего.
Описать прогулку? Пусть это будет лето. Жарки Питер, м? Очень жаркий, как в прошлом
году, когда дышать было нечем, и я буквально купалась в Неве и ходила так мокрой
по городу. В Бургер Кинге на это даже не особо обратили внимание, как я помню
- понимали. Мы бы вышли из дома и поехали бы к метро на маршрутке. На маршрутке,
так как мы бы просто не выжили после сорокаминутной ходьбы от моего дома до метро,
хоть я уже и проходила через такое. И это ужасно. В метро мы купили бы жетоны
и немного поспорили бы по поводу питерских жетонов и московских карточек, я одержала
бы победу. Ведь, знаешь, жетоны в замкнутом кругу, их не нужно особо производить.
Карточки же одноразовые, они выкидываются и приходится производить новые. Мы бы
закончили на этом и продолжали бы шутить на тему "вражды" Питера и Москвы - наше
любимое. И, знаешь, Питер лучше. Войдя в вагон, мы бы сели, так как станция, на
которой живу, почти что конечная. Кто-то из нас положил бы голову на плечо другой,
это было бы очень мило и, возможно, неудобно. В пользу своего метро ты бы еще
сказала, что у вас есть WiFi и бесконечная ветка. Я бы согласилась, так как тоже
так считаю. Слишком много о метро, не так ли, хах?
Мы вышли бы на нужной станции и умирали бы от жары, таскаясь с сумками. Войдя
в ТЦ Галерея (он у нас большой и многолюдный, я его называю "местом всех встреч"),
мы бы, думаю, направились на предпоследний этаж к фудкорту. Мы бы долго решали,
где мы будем есть, хотя... нет, нет, недолго. Ты бы все взяла в свои руки и потащила
бы к... не к Макдональду, не захотела бы ты стоять в очереди. Может, блинчики
или крошка-картошка? Не знаю. Возможно, мы выбрали бы Бургер Кинг или KFC, ведь
там можно энное количество раз наливать воду. Взяли бы один стакан на двоих и
налили бы спрайт, наш особенный напиток. У нас были бы в нем две трубочки. А еще,
мы мыкали бы от удовольствия, попивая блаженный и, главное, холодный спрайт. Это
было бы мило, мы даже сказали бы это вслух. Доев, мы бы набрали еще раз полный
стакан спрайта и пошли бы ходить по торговому комплексу. Представим, что у нас
много денег с собой, м? Мы бы зашли рядом с едой в отдел, где много сладостей:
леденцов, мармеладок, карамелек, жвачек, сладких "лент", конфет. Набрали бы много-много
всего, чтобы потом ходить по магазинам и есть это все. Мы взяли бы еще чего-нибудь
для моей племянницы, которая обязательно спросила бы: "А что вы мне купили?".
Мы ходили бы по отделам, я люблю "H&M" и "RESERVED" и не люблю "Zara". Хотя, скорее
всего, именно ее ты больше всех и любишь. Но мне было бы жутко скучно в ней.
Я купила бы себе футболок, а ты обуви. Я знаю, как ты любишь обувь. Я постоянно
бы нападала на клетку, вспоминая Винчестеров*, ты бы тоже со мной шутила на эту
тему, только без лишнего фанатизма и посмеиваясь надо мной. А я нашла бы и любимую
рубашку Миши, и любимую шапку Джареда, и любимый джемпер Дженсена. Уставшие, мы
вернулись бы домой и, было бы логично предположить, что мы завалимся в кроватку.
Но нет, конечно, ни в коем случае. Мы ляжем очень поздно, потому что нельзя спать,
когда мы рядом, во сне мы теряем время. Да?
*персонажи сериала "Сверхъестественное". Дальше идет перечисление ведущих актёров
данного шоу.
Я соскучилась, так давай побудем вместе в письменной форме еще, ты не против?
Только я не знаю, куда нам пойти. А давай просто лежать на кровати.
Так вот, мы просто лежим, и между нами молчание, наше любимое молчание. Та тишина,
которую нельзя назвать неловкой. У нас нет такой. Скорее, родная, теплая, привычная,
уютная. Даже когда мы разговариваем о чем-то действительно, так сказать, неловком
и волнительном, мы можем замолчать, и это не будет опять же чем-то неуместным.
Я бы сидела в айпаде, думаю, а ты в телефоне. О да, мы поспорили бы, что лучше
- эпл или андроид. Отняли бы друг у друга гаджеты, ища в них неудобства и недочеты.
Надеюсь, я одержала победу в этом вымышленном споре. Черт, а может, мы посмотрим
фильм или сериал? Я столько раз выказывала в переписках наших желание посмотреть
с тобой вживую что-то, я так хотела этого, и все еще хочу. Мы даже на расстоянии
несколько раз умудрялись так смотреть сериалы и фильмы. И так часто мы не досматривали,
ты всегда убегаешь куда-то по делам. Среди нас двоих только я лодырь, который
сидит дома, выходя из помещения лишь на учебу.
Хм, так что бы мы посмотрели? Давай "Faking it", давно хочу посмотреть его. Или,
если ты разрешишь включить что-то мной уже просмотренное, то я посажу тебя смотреть
"Shameless". Я правда хочу, что бы ты посмотрела его. Я постаралась бы не спойлерить,
но хорошо, что ты спокойно к ним относишься, и мне не надо опасаться за свое здоровье,
если я проговорюсь. Я буду говорить: "Вот, вот, смотри, сейчас такое будет". А
ты, надеюсь, меня заткнешь. Прозвучало так, почему-то, будто поцелуем, но нет.
Ой, а еще у нас будут чипсы и "наш спрайт". Хотя, я лучше возьму что-то мучное,
а не чипсы. Но ты, я знаю, их любишь. Если мы просидим так до самой ночи, мы захотим
спать, наверное. И, если мы не выключим и нормально не ляжем, то мы просто уснем
на фоне голосов Ноэля, Кэма, Эммы и прочих людей нам знакомых-незнакомых. Черт,
нет, мы же будем смотреть в озвучке. Как жаль. Я проснусь раньше, чем ты, но не
встану, я буду ждать, пока проснешься ты. Посижу в айпаде, может. Когда же ты проснешься,
я скажу, что ты просто невероятная соня. Я, кстати, хочу, чтобы именно ты сделала
завтрак. Ты же умеешь готовить, ты не я.'
sentences:
- 'query: Не слушай меня, прошу! Я хочу тебе кое-что сказать, поэтому не слушай
меня. Я хочу это сказать тебе, прямо тебе, но чтобы ты не слышал. Или чтобы забыл
спустя мгновение.
Я ЛЮБЛЮ ТЕБЯ
Кричу это. У себя в голове. Знакомым. Не тебе. Всем, всему, всегда, но не тебе,
но никогда тебе. Господи, не слушай меня. Отвлекись от меня сейчас же, чтобы я
мог сказать тебе то, что хочу, и то, что чувствую. Тихо. Внутри это будет очень
громко, но не могу с такой громкостью сказать это вслух, не при тебе. Ты должен
отвлечься, тогда я скажу тихо, ты не услышишь. Может, ты спросишь: "Что ты сказал?"
И я отвечу: "Ничего, ничего". И ты не переспросишь снова, не станешь просить сказать,
потому что ты привык. Я часто говорю себе под нос и не отвечаю, что сказал, когда
переспрашиваешь. Тебя это так бесило. Может, и сейчас бесит, но ты уже не обращаешь
внимания.
У меня есть столько всего тебе сказать. Но когда ты говоришь: "Ну, расскажи что-нибудь.
Что угодно. Давай же". Я замолкаю. Я не знаю, что говорить. Спустя много минут
я могу рассказать что-то. Например, какой-то глупый сон или впечатления о только
что прочитанной книге. А ты любишь книги.
Господи, как я ненавижу то, что ты ненавидишь себя! Хватит!
Прекрати! Мне больно на это смотреть.
Но прекрати это делать не ради меня, а ради себя. Ты же знаешь, только так это
будет правильно. Не ненавидь себя, пожалуйста, не вини себя, когда знаешь, что
не виноват. Почему всегда все обстоятельства и весь мир против тебя, а? Мне так
жаль.
Сильнее людей я не встречал. Да, я не так много людей встречал... Но разве может
человек быть еще сильнее? Да может, наверное, кому я вру, мир большой, людей много...
но к черту, важно ли это? Нет. Важна лишь твоя сила и дух. Ты воин, ты мой воин.
У тебя даже есть такая футболка, на ней написано "warrior". Ах, как она идет тебе.
Воин.
Борись, Дин, борись, не сдавайся. Отдыхай, расслабляйся, но не сдавайся и живи,
живи, живи. Ты самый живой человек с мертвенно уставшей душой. Настолько живой,
что я с тобой рядом чувствую, что тоже живу. Такое ощущение, что до тебя я и не
жил. Потому что до тебя все было слишком иначе. Теперь я вижу мир другими глазами.
Может, дело в возрасте, вырос или что. Но все же ты принял огромное участие в
выстраивании мира вокруг меня, потому что ты огромная часть этого мира. Это можешь
даже услышать, послушать. Да я тебе даже это писал.
Ах, да я и говорил тебе, и писал, что люблю. Но мы же друзья, это нормально. Просто
то "я люблю тебя", что хочу прокричать... это не то "я люблю тебя", которое ты
говоришь с улыбкой, растрепывая мне волосы. Я люблю быть низким. Рядом с тобой.
На ум приходят строчки группы "A Great Big World"
I am feeling so small*
Черт, да эта песня вообще сплошной... я. Я почувствовал в ней себя тогда, когда
мы... перестали по моей вине общаться на какое-то время. Я убивался и даже бился
головой об шкаф, сидя на полу, кажется. Тогда к моим крикам о любви добавлялись
другие.
Say something, I''m giving up on you**
Не хочу больше чувствовать именно эту строку.
Потом я кинул тебе клип этой песни. Спустя какое-то время. Этим я признался
тебе во многом, это было письмо с шифром, которое не должно быть никогда расшифровано.
Я кричал, а ты не слушал. Потому что я умею тихо кричать. И слава богу.
*Я чувствую себя таким маленьким/незначительным
**Скажи же что-нибудь, я отдаляюсь от тебя'
- 'query: Pov Хорхе.
Всем привет, меня зовут Хорхе Бланко. Я работаю в офисе своего отца, совсем скоро
я стану главным. Мне двадцать четыре года, у меня есть девушка Макарена Мигель,
внешность у неё на любителя, но это не имеет значения, главное, что я люблю её.
Макарена она очень взрывная девушка, очень эмоциональная, не только в жизни, но
и в постели. Но от статуса бабника я не избавился, я продолжаю ходить по клубам,
признаюсь честно девушек я просто обожаю, но сплю только с одной. Ещё у меня есть
бывшая девушка Мартина, эта девчонка сводила меня с ума в школьные годы, она была
просто шикарна в постели, такое умеет только она. К сожалению или к счастью мы
расстались, правда совсем недавно, буквально полтора года назад, но мы продолжаем
общаться.
Встав с тёплой постельки я направился в душ, после бурной ночи с Макареной, целую
ночь она ублажала меня, ещё у неё очень слабый характер, я могу издеваться над
ней, но в хорошем смысле, в этом союзе главный я. Когда я встречался с Тини, доминировала
она, у неё сильный характер, она даст отпор любому. Приняв душ я вышел из ванной
и направился в спальню.
- Жожик, ты что, сегодня работаешь? По-моему, у тебя выходной, куда ты собрался?
- послышался голос любимой из-под одеяла.
- Никуда я не собрался, я просто принимал душ, кстати сегодня идём в клуб, - сказал
я и надел на себя спортивные штаны.
- Ок, в восемь буду готова, твои друзья пойдут с нами? - недовольно спросила она.
Мои друзья самые лучшие люди в мире, они не только мои друзья, но и друзья Мартины.
Диего и Лодовика, эта просто двое невменяемых людей, но они такие милые. Руджеро
и Канди, они не очень любят клубы и шумные вечеринки, они любят побыть наедине
друг с другом, несмотря на то что мы разные, мы лучшие друзья. А ещё Мечи, она
лучшая подруга Ти, а заодно и моя.
- Конечно, куда я без них, не надо мне говорить что они плохие, - недовольно проговорил
я и вышел из спальни. Дом у нас большой, но тут всегда такой беспорядок. Это просто
невыносимо, Макарена почти не убирается, целыми днями сидит дома и бездельничает,
как только заговоришь про уборку и готовку, она начинает целовать меня, а дальше
вы знаете что происходит.
- Я и не собиралась, - послышалось из комнаты. Я спустился вниз и принялся искать
свою футболку, но найти её просто не реально. Спустя несколько минут, я всё-таки
нашёл свою футболку. Пройдя на кухню я залез в холодильник, но ничего съедобного
я там не нашёл.
- В этом доме есть что-нибудь поесть? - прокричал я и поставил руки в бок. Макарена
спускалась со второго этажа, она была почти обнажённой, лишь простынь скрывала
её прелести. Хотя скрывать особо было нечего, грудь была очень маленькой, ноги
короткие, другое дело Марти.
- Что ты кричишь? Ты же знаешь что я не умею готовить, найми домработницу и вообще
зачем тебе еда когда у тебя есть я, - прошептала она и сбросила с себя простынь,
ткань упала к её ногам, оставив Макарену обнажённой. Я подошёл к ней и страстно
поцеловал её, я любил грубость. Я могу целую ночь над ней грубо издеваться и она
не скажет мне не слова. Повалив нас на диван я спустился к её груди....
- Бланко, ты в своём репертуаре? - нежный, но в то же время наглый голосок послышался
у меня за спиной. Конечно это была Тинка, её голос я узнаю из тысячи. Встав с
дивана и подняв кареглазую за собой, я чмокнул Ти в щёчку.
- Ты чего здесь делаешь? Тебя никто не звал сюда, - злобно прошипела Макарена
и натянула на себя простынку. Я всегда был рад видеть Мартину, она всегда для
меня останется родным человеком, но я её не люблю.
- Не ори, Хорхе, я пришла по делу, - серьёзно сказала она и посмотрела на меня.
Когда она звала меня по имени, то это значит что действительно что-то срочное.
- Да, я весь внимание, - спокойно сказал я и сел на диван.
- Можно поговорить наедине? - неуверенно спросила она. Что это с ней, первый раз
вижу её такой, неуверенной.
- Макарена выйди, - сказал я, та недовольно цыкнув покинула гостиную. Мартина
присела рядом со мной на диван, теребя в руках край своего платья.
- Хорхе, мне негде жить, я продала свой дом, хочу купить побольше, родители уехали,
а ключи не оставили, единственный вариант это ты, можно у тебя пожить? - с надеждой
спросила она и посмотрела на меня. У неё самые красивые глаза, которые я только
встречал в жизни.
- Хм, я конечно не против, но если Мака против, то ты извини, - сказал я, Макарена
конечно откажется, но не буду же я перечить собственной девушке ради своей бывшей.
- Понятно, значит, пойду к Мечи, - поникнув, сказала она и встала с дивана. Блин
что же делать, нельзя так поступать.
- Стой, давай спросим у неё, может, она согласится, Макарена, иди сюда, - крикнул
я и посмотрел на Тини, на лице у неё было полное отчаяние.
- Чего надо? - грубо спросила она и встала напротив нас.
- Мака, понимаешь Мартине негде жить, можно она поживёт у нас? - спросил я и посмотрел
на неё, её лицо тут же изменилось.
- Ещё чего, пускай у своих друзей живёт, - грубо произнесла она и вздёрнула головой
вверх, она возомнила себя здесь королевой, ну уж нет, мой дом и моя бывшая будет
жить здесь.
- Макарена, ты, похоже, забыла, чей это дом, Мартина будет здесь жить, сколько нужно,
это не обсуждается, - прошипел я и злобно взглянул на Маку, она, похоже, растерялась
и не знала, что сказать, поэтому просто убежала в комнату.
- Может не нужно было так, я могла пожить у Мечи, - тихий голос раздался на ухом.
Я был очень зол.
- Замолчи, тебе сказали можно, значит можно, надоели, - прокричал я и ушёл в кухню.
Как же дальше жить, эти двое сведут меня с ума.
Pov Хорхе.
- Где мой лифчик? - кричала Мартина с первого этажа.
- Где мои штаны? - а это уже Макарена из соседней комнаты. Если б я знал, что эти
две девушки сведут меня с ума за два дня, я бы никогда не пустил Мартину жить
к нам. Макарена постоянно меня ревновала и просто ненавидела Тини, а та в свою
очередь делала ей всё назло. Два дня подряд в моём доме не умолкают женские голоса,
соседи приходили уже три раза, я не знаю что с ними делать. А ещё у меня не было
секса уже два с половиной дня, только мы с Макареной начинаем процесс, Марти вваливается
к нам в комнату, за это я готов её убить. Сейчас я собираюсь на работу, Мартина
в институт, а Макарена на какую-то важную встречу, посиделки с подружками называются,
хотя подруг у неё маловато.
- Ну Хорхе, иди сюда, помоги найти мне мой лифчик, пожалуйста, - ныла Тинка со
второго этажа, я бы предложил ей надеть лифчик Макарены, но ведь он ей маловат.
- Мартина, я его даже в глаза не видел, - спокойно сказал я и спустился вниз.
Конечно вся эта ситуация меня не радует, но в этом есть плюс я могу видеть Марти
и её шикарное тело голым, пару раз когда она мылась я "нечаянно" заходил в ванну
и смотрел на Тини, она такая милашка, когда злится. Не было бы у меня Макарены,
я бы вновь начал встречаться с Тини, сказать честно пару раз в постели я представлял
Ти, её волосы пахнущие шоколадом и стройные ножки, а у Макарены нет таких потрясающих
волос как у Тини, да и ножки не подарок.
- Ну и где его искать? - спросил я и начал искать этот чёртов лифчик Тини.
- Твоя девушка умеет убираться? Почему у вас такой бардак в доме? Я не могу так
жить, - простонала она и устало села на диван. Эх, как же она права, Макарена
совсем забросила мой дом, нужно нанимать домработницу. Повернув голову в сторону
и приподняв непонятную вещь я обнаружил, чёрный кружевной лифчик и я сразу понял
что вещь принадлежит Марти, ведь у Макарены нет таких лифчиков, они у неё все
невзрачные и серые.
- Это твоё? - спросил я и поднял вещицу с тумбы. Марти заметно покраснела и попыталась
отнять у меня свою вещь, но я совсем не хотел этого делать.
- Оставь его мне, он очень красивый, он когда-нибудь тебе пригодится, обещаю,
- прошептал я на ушко Ти, она удивлённо посмотрела на меня, а потом пошло улыбнулась.
Она всегда понимала меня с полуслова, ей не нужны были мои намёки.
- Я к твоим услугам, а сейчас я пойду и возможно сегодня не приду ночевать, я
останусь у Мерси, она говорит что соскучилась по мне, я же всё-таки долгое время
была в Лондоне, - сказала она и скрылась в ванной.
- Ты была в Лондоне? Почему ты не сказала мне? - спросил я и натянул на себя брюки,
а затем и рубашку.
- Я знала что у тебя были проблемы с твоей девушкой и не звонила долгое время,
помнишь четыре месяца мы с тобой не общались, вот тогда я и была в Лондоне и,
знаешь я прекрасно отдохнула, - мечтательно сказала она и вышла из ванной, на
ней были надеты юбка чуть выше колена, нежно-розовая блузка и туфли на шпильке,
это её обычная одежда в институт.
- Макарена, мы уехали, я буду поздно вечером, а Тини сегодня ночует у подруги,
так что ты можешь целый день, а потом и ночь отдыхать, - спокойно сказал я и обул
лаковые туфли. Через два месяца я займу место своего отца, я в предвкушении этого
дня. Родители у меня очень порядочные люди, они просто ненавидят Макарену.
- Я счастлива, пока, - крикнула она со второго этажа. Я вышел из дома и направился
к своей машине. Сев в неё я принялся ждать Тини, иногда мне кажется что она не
бывшая, а настоящая. Любви конечно нет, но влечение как к женщине есть огромное.
Спустя минуту из дома вышла Мартина, бормоча что-то при этом себе под нос.
- Твоя девушка просто идиотка, она не умеет ни готовить, ни убираться, даже нормально
формулировать свои мысли не умеет, как ты можешь с ней встречаться? - нервно спрашивала
она, усаживаясь ко мне в машину.
- Зато в постели она просто бомба, - сказал я и выехал со двора.
- Ясно, - произнесла она и отвернулась. Она что обиделась? Я не стал ничего больше
спрашивать, лишь продолжил путь к универу.
Закончив свой рабочий день я устало откинулся на спинку стула и тяжело выдохнул.
На часах восемь вечера, а домой совсем не хочется. Каждый день одно и то же, хочется
разнообразия. Телефон, который лежал на столе, начал настойчиво трезвонить, посмотрев
на экран я увидел незнакомый номер, нажав кнопку принять я поднёс телефон к уху.
- Алло, - тихо сказал я.
- Эм, здравствуйте, вы знаете девушек по имени Мартина Штоссель, Мерседес Ламбре,
Лодовика Комельо и Канделария Молфесе? - мужской голос раздался на том конце прово'
- 'query: Сегодня, прыгая на кровати, Кира сломала ее. Она отчаянно пыталась допрыгнуть
до потолка, но ничего не получалось, и опилки лишь тщетно сыпались на пол. Никто
не слышал ни скрежета пружин, ни грохота; не было видно и самой поломки.
Мать, ругая дочь, в ответ получила лишь усталое равнодушие, что, конечно же, вывело
ее из себя. Крича что-то нечленораздельное, она стучала ногой по сломанному предмету.
Женщина не понимала, что она делала только хуже, но гнев в ее крови взял верх.
- Да как ты смеешь, паршивая девчонка! Я только и делала, что ухаживала за твоей
кроватью! А ты решила устроить погром?! Знаешь, что?! Я это так не оставлю! -
на этих словах женщина, чуть ли не снимая дверь с петель, выбежала из комнаты.
Кира резко опустилась на колени. Прижав руки к кровати, она пыталась сдерживать
ее невыносимый скрежет.
Взяв молоток и гвозди из кладовой, девочка безнадежно колотила по обломкам, пытаясь
хоть как-то их соединить. Но все оказалось безрезультатно: обломки лишь с еще
большим стремлением раскалывались под гнетом гвоздей.
Она легла на пол. Легкий сквозняк щекотал ее спину.
- Я никогда не смогу допрыгнуть до потолка, - сказала Кира и выдохнула.
- А вдруг это не так?
Кира резво встала. На ее лице появилась маска недоумения, а в груди начал разгораться
огонек страха. Откуда этот голос?
- Не бойся, глупышка, - голос был очень мягок.
- Откуда ты? Я тебя раньше не слышала...
- А разве это важно?
- А что, нет?
- Почему это должно быть важно? Разве нельзя просто поговорить с тобой?
- Ты думаешь, я буду говорить с незнакомым голосом?
- А почему нет?
- Так. Мне надоедает эта игра в вопросы. Говори, что или кто ты есть?
Внезапно наступило молчание, после чего последовало продолжительное гудение.
Голос начал напевать песенку, не песню, а именно песенку. Любимую песенку Киры,
которую она заводила каждый раз, когда ломалось что-нибудь в ее комнате.
- Я могу построить тебе новую кровать. Гораздо лучше этой. В ней будет много цветов
и сладостей...
Девочка оживилась. В ее речи послышались нотки радости.
- Правда? Ты сделаешь это?
- Да, но вот только...
- Что "только"?
- Только она будет не настоящей. Ты не сможешь на ней спать, но она будет в твоей
комнате. - голос откашлялся. - Ах, да. Кроме тебя ее никто не увидит.
Девочка задумчиво улыбнулась.
- Но когда же я смогу увидеть свою кровать?
Голос начал смеяться. Сильно, долго, но мягко. Этот смех был очень и очень необычен:
вроде бы и добрый, а вроде бы и с насмешкой.
Жалость.
Жалость управляла им.
- Почему ты смеешься?
- Да потому что ты глупая девочка, которая даже не может решить.
- Я вовсе не глупа!
- Да? Так ответь: тебе нужно то, что я предлагаю?
- Но это же вовсе не настоящая кровать! - Кира приложила руки к лицу. - На ней
я не смогу допрыгнуть до потолка!
Голос опять залился смехом.
- ПОЧЕМУ ТЫ СМЕЕШЬСЯ ВСЕ ВРЕМЯ?!
- Да потому что ты уже решила. Уже давным-давно решила.
- И что же я решила?
- Ты согласна, ведь так?
Кира замешкалась, но, все же, выдавила из себя неуверенное "да".
Голос пропал, оставив после себя огромную кровать, с большим матрасом и мягкими
подушками. На такой кровати, определенно, можно было бы допрыгнуть до потолка.'
- source_sentence: 'query: Тяжёлые портьеры с тихим шелестом разъехались в стороны,
впуская внутрь свет. Он был серо-белым и больно бил по привыкшим к темноте глазам.
Я потушил стоявшие на ночном столике свечи и поднялся с кровати.
Этот день не предвещал ничего необычного. Я как всегда проснулся на несколько
мгновений раньше всеобщей побудки. На автомате привёл себя в порядок и облачился
в свой длинный чёрный балахон, подпоясав его серебряным шнурком - знаком жнецов.
Надел на голову ненавистный обруч, который сильно давил на лоб, но без которого
я не мог появиться вне своей кельи - он указывал на мою принадлежность к Высшей
касте. Последний штрих - Кольцо верности.
Моя келья располагалась на самом престижном ярусе - нижнем. Местные ещё называли
его Колыбелью теней. Там имели право находиться только жнецы Высшей касты - те,
кого Смерть выбрала своими советниками.
Каждый рассвет я поднимался на отсчётный ярус, где получал указания от Старейшин,
а затем приступал к своей работе. Я - собиратель душ.
Я пересёк учебный зал, где обучались молодые жнецы. Когда-то я и сам сидел на
этих скамьях и тщательно внимал каждому слову профессора. Наверное, именно мои
старательность и упорство помогли мне дослужиться до Высшей касты.
Само место, где мы обитали, называлось Храмом. Он располагался вне пространства
и времени. Храм был сделан из белого, красного и чёрного мрамора и имел очень
высокие своды. Кое-где они достигали такой высоты, что их невозможно было увидеть.
Такие своды назывались небесами.
Обитатели Храма жили в кельях - просторных круглых комнатах. Поскольку времени
для нас не существовало, но внутри всё равно тикали биологические часы, были придуманы
рассветы и закаты. Храм был окутан белой, светящейся пеленой, поэтому на закате,
когда требовалось ложиться спать, портьеры на всех окнах задвигались. На рассвете
же наоборот - они разъезжались в разные стороны. Делалось всё это в единое для
всех время, и изменить закрытие и раскрытие портьер даже в личных кельях было
невозможно.
Наконец, я на своём ярусе. Сегодня в зале было на удивление пусто - внутри оказался
лишь один Старейшина. Когда я вошёл, он стоял спиной ко мне, запрокинув назад голову
и разглядывая небеса.
- Вам следует быть более расторопным, брат Рихард, - сказал он спокойно, не отрывая
взгляда от небес. - Особенно когда для Вас припасено такое задание.
Его слова заставили меня оживиться: они обещали что-то посложнее простого сбора
душ.
- И каково же оно? - спросил я, стараясь придать голосу безразличие.
- Не обольщайтесь, это не просто интересная игра, - Старейшина наконец обернулся
и посмотрел мне в глаза. - Вы будете иметь дело с очень необычным экземпляром.
Чуть приподняв полы путавшегося в ногах балахона, Старейшина подошёл ближе ко
мне.
- Мы давно охотимся за душой этого человека. Смерть не раз вносила его в свои
списки, но каким-то неведомым образом он уже четыре раза от нас ускользал. Опытнейшие
жнецы не смогли справиться с этим заданием. Тогда Смерть отдала приказ отправить
к этому человеку именно Вас, - Старейшина сделал ударение на последнем слове.
- Не подведите её. Иначе Вы знаете, что Вас ждёт.
Он бросил взгляд на Чёрное око. Это колодец, располагавшийся в центре этого зала.
Он использовался в качестве наказания для неверных или просто неугодных Смерти
жнецов. Они подвергались так называемому Перевоплощению. Виновного сбрасывали
в чёрное жерло этого колодца, после чего он исчезал из Храма навсегда. Ходил слух,
что казнённые обретали новую жизнь на земле, только в другой ипостаси (поэтому
процедура называлась Перевоплощением). Однако не было никого, кто бы смог подтвердить
это или опровергнуть, поэтому все боялись быть подвергнутыми Перевоплощению.
- Я ни разу не позволял Вам и Смерти усомниться в своём профессионализме. Задание
будет выполнено, - сухо произнёс я.
- Приятно слышать уверенность в Вашем голосе, но не теряйте бдительности. На этот
раз Ваша вылазка в мир людской будет длительной. Времени Вам будет дано столько,
сколько потребуется, но помните - Смерть не любит ждать. Вам нужно войти в доверие
к этому человеку, узнать его как можно лучше, чтобы его душа сама потянулась к
Вам. Тогда Вы сможете беспрепятственно её забрать, и задание будет считаться выполненным.
Воровато оглянувшись, Старейшина склонился к моему уху и шепнул:
- А ещё в случае успеха я позабочусь о том, чтобы Вы примкнули к нам.
Он намекал на то, что поможет мне получить самую высокую должность Храма - должность
Старейшины.
Я коротко кивнул.
- Я могу приступить к заданию?
- Разумеется. Служки соберут Вам всё, что потребуется для людской жизни, и Вы
можете немедленно отправляться.
Я снова кивнул и направился к выходу. Уже у самых дверей я обернулся и спросил:
- А как зовут моего подопечного?
- Тилль Линдеманн.
Едва я почувствовал, что твёрдо стою ногами на земле, как на меня налетел сильный
порыв холодного ветра и начал нещадно резать оголённые участки кожи. Я потёр замёрзшие
руки, прижал их ко рту, чтобы согреть своим горячим дыханием, но от этого было
мало толку. Шапкой меня снарядить не удосужились, поэтому голова тоже ощущала
все прелести выдавшейся в этом году суровой немецкой зимы.
Я осмотрелся. Вокруг лишь засыпанные снегом деревья. Никаких признаков того, что
поблизости кто-то обитает. Выкидывать меня прямо на порог объекта было опасно
и подозрительно, поэтому я оказался в некотором отдалении от места назначения.
И теперь, стоя в этом заснеженном лесу, я понятия не имел, куда мне направляться.
Я решил поддаться интуиции, которая меня редко подводила. Жнецы имеют много способностей:
например, мы умеем подавлять сознание людей или становиться невидимыми, когда
нам это необходимо, но почему-то внутреннего компаса в нас не встроено.
Ноги проваливались в глубокие сугробы. Чертыхаясь, я медленно продвигался вперёд,
внимательно выискивая вдалеке хоть какие-то признаки человеческого жилья, однако
тщетно. Я мысленно перебирал все те знания, которые когда-то получил об этом регионе,
но так и не смог припомнить, чтобы тут были характерны такие погодные условия.
Даже подумал о том, что вышла ошибка, и меня закинули в какое-то другое место.
Время шло, а пейзажи вокруг не менялись, зато начинало темнеть. Я уже начал беспокоиться,
но не давал панике полностью овладеть мной. Только когда над лесом поднялся ополовиненный
диск луны, я обессиленно повалился на землю, подложив под себя рюкзак, в котором
был собран минимум одежды. Рук я уже совершенно не чувствовал, а голова раскалывалась
от холода. В мыслях крутилось, что такой мастер своего дела как я не может так
глупо провалиться, но сил дальше двигаться не было.
- Я лишь немного... отдохну, - успокаивал я самого себя, пытаясь разлепить настырно
закрывающиеся веки. Но тело меня уже не слушалось. Перед тем, как окончательно
отдать себя в объятия сна, я, кажется, услышал скрип снега.
Придя в себя, я первым делом почувствовал запах молока. Было на удивление тепло,
даже жарко. Приоткрыв один глаз, я увидел напротив себя задорно пляшущие язычки
пламени в камине. Опустил взгляд вниз - моё тело плотно закутано в толстое одеяло,
а сам я лежу на каком-то диванчике.
Где я?
В голове неохотно начали ворочаться шестерёнки воспоминаний, проворачивая назад
сегодняшние события. Рассвет, Старейшина, задание, лес, холод, обморок, пустота.
Я ещё раз по мере возможностей осмотрел себя, даже заглянул под одеяло. Кто-то
заботливо притащил меня из леса, стянул всю одежду и согрел. Окинул взглядом комнату
- моего благодетеля не наблюдалось.
Я осторожно попытался подняться на локтях, на что тело отозвалось неприятной ноющей
болью. Скривившись, я всё же привёл себя в сидячее положение и ещё раз тщательно
всё осмотрел. Комната была небольшой и весьма неухоженной. Повсюду валялись какие-то
обёртки, на полу стояли пустые бутылки, кое-где виднелись и грязные тарелки. Обои
местами отходили, обнажая деревянные стены, практически всё было исписано именами
каких-то людей и различными странными названиями, значения которых я не понимал.
Однако было видно, что хозяин любит свою берлогу и ни за что не расстанется с
ней.
Входная дверь протяжно заскрипела, и в комнату вошёл здоровенный детина, державший
в руках кружку, по размерам больше похожую на ведро.
- Очухался, - буркнул он, протягивая мне "ведро". - Пей.
Тёплое молоко. Я не был большим поклонником этого напитка, но внезапно ощутил,
что именно его сейчас требует мой организм, и жадно припал к кружке. Парень, сидевший
напротив и угрюмо наблюдающий за мной из-под отросшей чёлки, смотрел на меня несколько
минут, после чего спросил:
- Какого лешего тебя в лес-то понесло?
Я кое-как оторвал себя от молока.
- Машина на трассе заглохла, пошёл за помощью и заблудился, - выдал я заранее
подготовленную легенду.
- Считай, пропала твоя тачка, - фыркнул незнакомец. - У нас тут сейчас не самый
спокойный народец обитает. Скорее всего, уже прибрали к рукам твою лошадку.
Я постарался сделать разочарованный вид.
- Эх, ну что ж я так!
- Ты в полицию попробуй обратиться, но это вряд ли тебе поможет. Меня, кстати,
Тилль зовут.
Я аж подавился, услышав имя своего спасителя.
- Эй, ну ты аккуратнее! - Тилль дернулся в мою сторону, но я жестом остановил
его.
- А я..кхм-кхм...Рихард, - прохрипел я, пытаясь перебороть приступы кашля.
- Застрял ты, похоже, здесь, Рихард, - Тилль кивком указал на окно. На улице не
на шутку разбушевалась метель.
- Удача, если мы завтра вообще сможем из дома выйти, - вздохнул Тилль. - Уже неделю
метёт, зараза, и всё никак не успокоится. Так что обождать придётся, прежде чем
я докину тебя хотя бы до Шверина.
Я неуверенно пожал плечами, чтобы Тилль не подумал, что я напрашиваюсь.
- Только ты, слышь, сам себе будешь жрать готовить, я тебе не хозяюшка. Ну и мне
заодно можешь, - ухмыльнулся Тилль. - Арендная плата, так сказать, за предоставленную
жилплощадь.
Я согласно кивнул.
- Ну и ладушки. Ах, да, пшёл вон с моего дивана!
Помня о том, что мне нужно наладить с этим человеком контакт, и злить его ни в
коем случае не следует, я послушно встал, укутавшись в одеяло.
- Пф, тоже мне, - хмыкнул Тилль, глядя на то, как я пытаюсь поудобнее завернуть'
sentences:
- 'query: scrtrm - piano #11
В тихом-тихом углу улицы (если это вообще угол) сидели два ребенка примерно одинакового
возраста, точно не достигнувшие десяти, а то, может, и девяти лет от роду. Вокруг
не было никого из тех шумных взрослых, которых все дети почему-то недолюбливали.
Однако, кроме этих двоих, на улице изредка пробегали, а то и проезжали на полной
скорости подростки на велосипедах и на скейтбордах. Некоторые из них тихо и неспешно
ездили на роликах, оглядываясь по сторонам и разглядывая ларьки с мороженым.
На улице лето. Июнь вроде бы, а дети уже во всю развлекаются, забыв про учёбу.
А те два ребёнка всё так же сидели на двух лавочках, параллельных друг другу,
изредка бросая взгляды друг на друга и поспешно отворачиваясь, когда один из них
замечал эти взгляды. Шум листьев рядом цветущей поздней черёмухи добавлял романтичности
картине. Быстрые школьники не раз смеялись над ними, не задумываясь над тем, что
чувствуют эти дети. Первая любовь всегда приходит по-разному.
Мальчик с голубыми волосами, одетый в тонкую светло-оранжевую рубашку и загнутыми
до колена коричневыми брюками подошёл к девочке и спросил:
- П-привет. Тебе не нужен кот? С-синий такой, с чёрными глазами и маленьким рюкзачком
на спине?
- Н-нет, спасибо, - ответила девочка, отворачиваясь в сторону. - Да и у меня уже
есть кот. Второго родители не разрешат.
- Понятно, а этого кота Хэппи звали. Добрый такой, отзывчивый, понимающий. Хороший,
в общем, кот.
- А у меня кота, ну вернее не кота, а кошку, Шарли зовут. Но она странная, не
разговаривающая, грубая, даже стер... Как там взрослые говорят, сте-рво-зная,
она у меня. И общаться не любит.
- Я уверен, Хэппи бы нашёл к ней подход и разговорил бы её. И она б стала такой
же как и он.
- Не знаю, я, то есть мы никогда не пробовали.
- А давай попробуем и узнаем! Ну, так ты со мной?
- Х-хорошо!
Через девять лет
- Шарли, а ты бы хотела стать кошкой? Ну или её завести?
- Нет, Хэппи, у меня уже есть один. Одного достаточно.
- Вспоминаем детство, а Шарли?
- Угадал, Хэппи! Вспоминаю детство и то самое место.
- Ну а всё же, кто этот кот?
- А почему ты думаешь, что это кот?
- Так ты же сама сказала! Склеротичка!
- Я не говорила такого!
- Говорила!
- Нет!
- Не нет, а да!
В старом парке на двух параллельных скамейках сидела молодая пара, которая о чём-то
спорила и ни одна из сторон не хотела уступать. А парк всё так же никто не посещал,
лишь изредка подростки катались на велосипедах и роликах. А на дворе всё так же
был июнь и под шум поздней черёмухи влюблённые шли по старому заброшенному углу
улицы старого отстроенного парка.'
- 'query: Бабах! - новая молния с грохотом разрезала небо. Непогода продолжалась
уже третий день. Кагура поёжилась. Дождь она не любила - такая погода лишала её
единственного доступного ей счастья - полётов. Общение с Кохаку успело надоесть
хуже горькой редьки - парень не говорил ни о чём, кроме как о способах убить Нараку.
Девушка, конечно, понимала, почему он очень был сосредоточен на своей мести, но
- ветер свидетель! - нельзя же было думать только об одном! Настроение не улучшало
и то, что с самого начала грозы сам Нараку был необычайно деловит: создавал новое
порождение, что Кагуру немало беспокоило, и варил какое-то чрезвычайно вонючее
зелье, отравлявшее воздух.
- Кагура! - голос Нараку заставил демоницу вздрогнуть. - До моего приказа из замка
- ни шагу! Ты мне понадобишься!
- Напомню, что в дождь я летать не могу, - стараясь сдержать в своём голосе гнев,
ответила девушка. После того как Нараку избавился от Хакудоши, поглотив ребёнка
за то, что заподозрил его в предательстве, Кагура старалась быть максимально осторожной.
- Я никогда ничего не забываю, - в небрежно брошенной фразе Повелительнице Ветра
послышалась угроза - Нараку словно бы напомнил ей про попытку улететь с двумя
осколками Шикона.
Два последующих дня каждый был занят своими делами: Нараку корпел над своими котлами,
Кохаку тренировался, Кагура боролась с завистью к Канне, которая эмоций не испытывала
и от смеси страха с нервным ожиданием не мучилась.
Всё проходит. Прошли и дождь, и время приготовления зелья, и время создания нового
порождения. Кагура рассматривала сидящую перед ней девушку и ощущала, что что-то
с ней явно было не так. И лишь через минуту поняла - она вообще не чувствовала
новое порождение Нараку! Не было ни демонической ауры, ни запаха, ни малейшего
звука - даже стука сердца не улавливали чуткие уши демонессы!
- Кагура, что застыла? Я же сказал тебе - слетай и принеси своей сестре одежду!
- Нараку заметно возвысил голос.
- В замке и так одежды полно - пусть сама подберёт! - ответила та.
- Я сказал: слетай и принеси, - сквозь зубы прошипел полудемон.
"Чёрт, что этот паук задумал, что я услышать не должна? Точно какую-нибудь гадость,
чтобы меня помучительней убить! Что же делать, что же делать?! Сбежать? Нет, у
него моё сердце... Дьявол, мне поставлен мат!" - рассуждала Повелительница Ветра,
идя по коридорам.
- Кагура-доно, что-то случилось? - демонесса так ушла в свои мысли, что не заметила
появления рядом Кохаку.
- Кохаку-кун... - девушка замялась и присела на одно колено - смотреть на парня
сверху вниз сейчас ей совершенно не хотелось, - в моей комнате есть шкатулка...
красная с синими птицами...
Охотник на демонов внимательно смотрел в глаза если не подруге, то уж точно союзнице,
и старался угадать, что у той на душе.
- ...прямоугольная. Длиной... немного длиннее моего веера, - Кагура на двух руках
показала своё оружие, и ребёнок кивнул в знак того что запомнил, - шириной - примерно
как твоя ладонь от пальцев до основания ладони. Закрытая. В общем, если... хотя
скорее уж "когда"... меня убьет Нараку, постарайся передать её как-нибудь Сессёмару.
Ты ведь его помнишь?
Кохаку снова кивнул.
- Скажешь, что это от меня. Пусть ломает замок - ключ всегда при мне.
- А Вы?
- Ты же знаешь, какие небезопасные игры мы с тобой затеяли... - Кагура встала,
провела ладонью по волосам парня и вышла из замка. Кохаку украдкой смахнул набежавшую
слезу.
Когда через полчаса девушка вернулась с ворохом одежды, разговор уже явно завершался.
- Отлично. Раз уж их сила в единстве и воле каждого - вряд ли что-то сможет уничтожить
их лучше. После такого даже Кохаку их перебьёт без проблем! - казалось, даже голос
нового порождения был неуловим, появляясь словно из ниоткуда и ежесекундно меняя
тембр.
- Кстати, забыл представить, - полудемон явно был чем-то доволен, - её зовут Чиё.
Этой ночью вы действуете вдвоём. Компания Инуяши разделилась - это их и погубит.
Кстати, Кагура, ты хорошо мне послужила. Если этой ночью не облажаешься - получишь
своё сердце.
"Интересно, я хотя бы рассвет увижу?" - мысленно попрощалась с жизнью Повелительница
Ветра.
- Кагура, повторяю специально для тебя - делаешь всё, что прикажет Чиё, - с ядовитой
усмешкой промолвил Нараку. Повелительница Ветра первую фразу прослушала - в тот
момент полудемон демонстративно отдавал Чиё сердце первого порождения Нараку.
Вместо ответа женщина взмахнула веером, и перо с двумя демонессами взлетело.
- Эй, Чиё, а что ты умеешь, а? Как будешь Инуяшу и остальных убивать?
- Даже не пытайся на меня напасть, чтобы отобрать сердце - я сильнее, - проговорила
своим непонятным голосом та. Кагура дернулась: "Догадливая стерва!"
- И я не буду их убивать. Я сделаю так, что они сами все будут искать смерти.
Давай быстрее, нас ждут, или, вернее, не ждут в трёх местах.
- В трёх? - удивлённо вскинула бровь Повелительница Ветра. - Группа Инуяши разделилась
на три?
Чиё коротко хохотнула. Хохот был столь же неприятным, как и голос.
- Не твоё дело. Начнём с...
- Санго, уверен, их души пребывают в лучшем мире, - тихо проговорил Мироку.
- Спасибо за поддержку, хооши-сама.
С момента резни в замке Нараку и селении охотников на демонов прошёл ровно год.
Санго, оставив Инуяшу, Шиппо и Кагомэ в деревне Каэдэ, отправилась навестить могилы
своих родственников и товарищей. То, что девушка взяла с собой Мироку, парень
воспринял как знак особого доверия и, несмотря на всё своё желание, старался себя
вести подобающе, что ему пока удавалось. За день он с Санго обновил надгробные
таблички, помог, по её просьбе, повыдергать разросшиеся на могилах растения, совершил
нужные обряды.
- Эх. Интересно, где сейчас Кохаку? - вздохнула охотница.
Монах поднял глаза к небу: его потаённые молитвы услышаны не были. Парень до
последнего надеялся, что девушка не вспомнит про брата - каждый раз, когда это
происходило, она впадала в тяжёлую меланхолию.
- Санго-тян, мы обязательно найдём его. Найдём и вырвем из лап Нараку, - Мироку
обнял охотницу. Та хныкнула ему в плечо. Кирара принялась утешительно мяукать.
- Хооши... можно Вы... поспите в одном доме со мной? - всхлипывая, тихонько попросила
девушка.
- Санго, я могу спросить...
- Я просто не хочу оставаться одна. Надеюсь, моё предложение Вы не воспримете,
как разрешение распускать руки! - последнюю фразу Санго произнесла уже более жёстким
тоном. Мироку подавил вздох разочарования.
- Ладно, обещаю вести себя соответственно своему духовному званию.
- Спасибо. Пойдём укладываться - скоро стемнеет.
- Блин, ну вот понесла бабку нелёгкая чёрти куда на ночь глядя! - раздражённо
рявкнул Инуяша.
- Заметь, я, как мико, должна была бы пойти сама! И пошла бы, да только она запретила
- сказала, что незачем лекарства из моего мира всем подряд показывать - вопросы
ненужные возникнут.
- А она не права?
Тррррр.
- Шиппо, хватит уже - достал до одурения своей трыкалкой! - Инуяша бросил в кицунэ
злой взгляд.
- Отстань! Мне Кагомэ-тян игрушку принесла, и я буду играть! - выкрикнул ребёнок,
прикрывая собой наполненный бубенчиками мячик.
- Чёрт, ну уши же вянут и голова звенит! Кагомэ, дура, зачем эту гадость принесла?
- Сидеть!
Трещ.
- Это не гадость. Это игрушка. Инуяша, ну он же ребёнок - пусть играется! И нет
бы за себя порадоваться - смотри, сколько я твоей любимой лапши принесла! - мико
постаралась переключить его внимание на что-то приятное.
Тррррр.
- Хоть какая-то польза от тебя и твоей эпохи, - буркнул полудемон.
- Хам!
- Стерва!
- Сидеть!
Трещ.
- Второй раз уже! - проорал беловолосый парень с пола.
Трррррррр.
- Заслужил! - в тон ему ответила Кагомэ, потирая пальцами виски - звон мячика
начал и у неё вызывать головную боль, но доброта, подпитываемая упрямством и нежеланием
согласиться сейчас с полудемоном, не давала ей запретить играть лисёнку. - Интересно,
как там Мироку с Санго?
- Поди этот блудист там разошёлся, - ответил парень, поднимаясь с пола.
- Инуяша, Мироку-сама знает, когда нужно сдержаться.
- Угу. Как от нашей охотницы получит - так сразу начинает сдерживаться.
Тррррр.
- Ладно, давайте спать! - Кагомэ демонстративно улеглась в спальный мешок, надеясь,
что её примеру последуют и остальные. Вскоре в хижине действительно стало темно
и тихо.
Кагура и Чиё приблизились к первой цели.
- ...сюда...
Санго, услышав непонятно чей голос, открыла глаза и увидела, как на неё падает
человеческая тень. Девушка отреагировала полностью инстинктивно - пнула тень двумя
ногами в живот. Та охнула.
- Сопротивляешься, недотрога? Не бойся - я оттрахаю тебя приятно!
Девушка с удивлением узнала по голосу Мироку и поняла, почему Кирара её не разбудила,
как она это обычно делала при появлении чужих запахов.
- Эй, ты что, головой ударился? - ошарашенно спросила охотница. Вместо ответа
парень снова прыгнул на Санго. Та откатилась со своего футона в сторону и поморщилась
от боли - под локоть попали осколки какого-то сосуда, и пара впилась в кожу.
"Какого чёрта! Он, что ли, напился? Нечем вроде! И что тут делает эта керамическая
гадость - мы же только убрались, и на полу ничего не было! - судорожно пыталась
разобраться Санго, просыпаясь окончательно и уворачиваясь от новой попытки схватить
её. - Точно! Значит, отрава такая!"
- Не уйдёшь теперь! Сделай же мне приятно! - Мироку в очередном скачке удалось
навалиться на девушку. Одну руку он смог захватить, вторую при неудачном кувырке
прижала собой сама Санго. Ноги девушки парень придавил своими, так что ударить
та не могла никак. Обычного мужчину тренированная охотница смогла бы без проблем
сбросить, но её нынешний противник тоже был хорошо подготовлен. Свободной рукой
парень разорвал бельё девушки.
- Кирара, убери его!
Единственной причиной, по которой кошка не вмешалась раньше, были особые отношения
между её хозяйкой и компаньоном - если бы кто другой попытался сотворить такое
на её глазах с охотницей, он уже через секунду валялся бы с разорванным горлом
и выпущенными кишками. Теперь же, наконец-то получив приказ, Кирара мгновенно
обернулась и просто снесла в прыжке ударом тела Мироку с Санго.
- Держи, но не трогай! - крикнула девушка, бросаясь в угол, где лежали её вещи.
Большинство сост'
- 'query: Кто придумал праздновать Рождество всем вместе? Какой садист? За что он
меня так ненавидит?
За окнами была метель, которая, казалось, отрезала наш домик от всего остального
мира. А мне хотелось отрезать себя от всех в принципе. Угораздило же меня согласиться
помогать Риху на кухне. Надо было предвидеть, что с ним будет он.
Я крошил овощи для салата с особой сосредоточенностью, стараясь не поднимать глаз
от разделочной доски, но у меня не всегда это получалось. Я не удержался и бросил
взгляд на них: Пауль подошёл к Рихарду сзади и приобнял его за талию, Рих мягко,
но настойчиво отодвинул его руки, намекая на то, что они сейчас не одни, но всё
же довольно улыбнулся. Я непроизвольно сжал рукоятку ножа ещё сильнее, так, что
костяшки пальцев побелели, и принялся крошить овощи ещё мельче, превращая их в
кашу. Я упорно пытался загнать обуревающие меня чувства и эмоции куда-нибудь в
самый дальний уголок своей души, стараясь, чтобы они не проступили у меня на лице.
Какое мне дело до наших голубков? Никакого.
Рука Рихарда плавно легла на мою. Я вздрогнул от неожиданности и выронил нож,
жалобно звякнувший от удара об пол.
- По-моему, уже достаточно, - мягкий голос лид-гитариста нежно обволакивал моё
заиндевевшее сердце, словно тёплое одеяло, и заставлял его оттаивать. Рих поднял
с пола нож, положил его на стол, забрал у меня доску с овощами и направился обратно
к плите. С некоторых пор у Рихарда появилась привычка ходить по дому с обнажённым
торсом, и сейчас, когда он дефилировал по кухне в фартуке поверх чуть смуглого
голого тела, мне стоило больших усилий не пялиться на него и не захлёбываться
слюнями. Поэтому, бросив короткий взгляд на его спину, я постарался как можно
быстрее отвернуться.
С недавних пор мне стало казаться, что Пауль за мной следит. Я не раз замечал
на себе его прищуренный взгляд. Может, ревнует? Ну и пускай, пусть хоть немного
помучается.
- Спасибо, Тилль, - настойчиво произнёс Пауль и посмотрел на дверь. Уж не знаю,
был ли этот жест случайным или же преднамеренным, но я и так понял намёк. Угрюмо
кивнув, я вышел, прикрыв за собой дверь. Стоило мне отойти на пару шагов, как
я услышал, что со стола упали кастрюли (или же были специально оттуда скинуты
чьими-то нетерпеливыми руками). Я стиснул зубы и отошёл подальше от кухни.
Шнайдеру кое-как удалось уговорить меня спуститься к ужину. Я резко захлопнул
блокнот, в котором уже почти не осталось чистых страниц, неохотно поднялся со
стула и спустился в столовую, мрачно думая, что впереди ещё неделя такого кошмара.
Почему надо было тащить меня в этот чёртов загородный дом, где вокруг один лес?
И на глазах вечно Рихард с Паулем... Я был уверен, что вид этой парочки за столом
сразу отобьёт у меня аппетит. Например, когда Пауль снова как бы незаметно положит
Рихарду руку на колено, отчего у Риха покраснеют уши, он начнёт нервно посмеиваться,
затем как бы случайно уронит вилку на пол, нагнётся, чтобы неловко клюнуть своего
любовника в руку, после чего у Пауля на лице обязательно появится довольная улыбка
сытого кота. Ненавижу.
Вся группа уже сидела за столом, орудуя столовыми приборами над приготовленным
нами ужином. Моя порция одиноко стояла в стороне - значит, они не были уверены
в том, что я приду. Я молча взял её и сел за стол, никак не отреагировав на пожелания
приятного аппетита. Кажется, они думают, что у меня творческий кризис. Это было
бы мне на руку - меня побаивались трогать, предоставляя меня самому себе. А вот
рука Пауля медленно потянулась вправо, к Рихиному колену. Во рту всё окислилось,
я с трудом сдержал рвотный позыв. Не стоит заводиться... Не стоит заводиться...
Не стоит заводиться!
Я откинул вилку в сторону, за столом воцарилась гробовая тишина. Я знал, что ещё
несколько минут никто ничего не скажет: все будут дожидаться, пока я уйду, чтобы
не попасться под горячую руку.
И я ушёл. Зачем портить людям настроение. У них ведь атмосфера праздника. Завтра
же грёбаное Рождество.
Оливер зашёл ко мне, когда было уже за полночь. Я сидел за столом и усердно строчил
в блокноте. Рифма хлестала из меня без остановки, меня словно тошнило строчками
стихотворения; мне было плохо, но всё, чем я мог себе помочь - это писать и писать,
выплёскивая из себя всё то, что накопилось.
В комнате было темно: я писал, сидя у окна, и мне вполне хватало света фонаря
на улице.
- Я могу чем-нибудь тебе помочь?
Я едва сдержался, чтобы нервно не засмеяться: конечно, можешь, Олли! Задуши Пауля.
Нет, лучше застрели. Или утопи. Сделай хоть что-нибудь, чтобы это маленькое чудовище
не появлялось рядом с Рихардом!
- Нет, спасибо, у меня всё в порядке.
Оливер недоверчиво посмотрел на меня, но не стал ничего говорить, за что я был
от души ему благодарен. Я порой завидовал его спокойствию, терпению и, пожалуй,
мудрости. У нас с ним значительная разница в возрасте, но порой мне кажется, что
он значительно опытнее меня. Наверное, всё дело в его спокойном и немного загадочном
взгляде. Говорят, глаза - зеркало души. Я бы с большим интересом заглянул в душу
к Оливеру. Пожалуй, это один из немногих людей, чья душа меня в принципе интересует.
Мы молчали. За окном в причудливом вальсе кружились снежинки, а ветер, словно
строгий балетмейстер, сурово ворчал на них. Тикали настенные часы, тихо и глухо,
будто подстраивались под стук моего сердца. Оливер ещё стоял за моей спиной. Мне
казалось, он хочет ещё что-то мне сказать, но я не понимал, почему он медлит,
вроде бы в его взгляде не было нерешительности. В конце концов, он развернулся
и направился к выходу, очевидно, передумав и решив оставить этот разговор на потом.
Он лишь спросил напоследок:
- Тебе что-нибудь привезти из города? Мы со Шнайдером забыли лыжи, а нам уж больно
хочется покататься, пока снег не начал превращаться в грязь.
- Захватите мне чистый блокнот.
И Оливер ушёл, снова оставив меня наедине с блокнотом. Черкнув ещё пару строк,
я понял, что фонтан идей иссяк. Я задумчиво пролистал исписанные неаккуратным
почерком страницы. Осталась лишь одна чистая. Только сейчас я понял, что все стихи,
написанные здесь, посвящены Рихарду.
Я решил лечь спать не столько потому что устал, сколько из желания убить время.
Время... Как же я его ненавижу. Вероятно, потому что упустил его. Пока я писал
свои стихи, время капало, словно вода из плохо закрытого крана. И я опоздал. Теперь
Рихард разделяет свою широкую кровать не со мной, а с тем, кто оказался проворнее,
быстрее, сообразительнее. А я так и остался лежать под холодным одеялом один,
наедине со своими стихами, которые я никому и никогда не покажу. И я постепенно
становлюсь колючим, и, кажется, обрастаю коркой льда.
Сон не приходил. Я уже потерял счёт тому времени, в течение которого я лежал на
спине, глядя на белоснежный потолок, по которому скользили тени снежинок. Тишина
гудела в ушах. Хотя я довольствовался ей не так уж и долго.
Спустя какое-то время, к моему негодованию, я услышал скрип кровати в соседней
комнате. Я до последнего надеялся, что мне это показалось. Нет. Не показалось.
Скрипы стали сильнее, деревянные башенки у изголовья кровати начали стучаться
об стену, и я услышал его голос. Громкий протяжный стон, затем серия чуть более
частых. Мне казалось, я слышал даже его дыхание.
Меня словно ударили в грудь, выбив весь воздух из лёгких. Было больно и обидно.
Я пожалел, что поселился рядом с Рихардом. Я накрыл голову подушкой, стараясь
не слышать этого кошмара, но тщетно; вскочил с кровати, начал ходить по комнате,
подавляя желание броситься в соседнюю и раскидать любовников как котят. Хотелось
обличить, пристыдить их. Впрочем, перед кем? Все и так давно обо всём знали и,
как взрослые, адекватные люди, закрывали на это глаза. Так что я просто выставлю
себя дураком.
Какофония из стонов и скрипов становилась всё громче. Я обессилено упал на кровать,
моля Бога, чтобы скорее всё закончилось, и закрыл глаза. Слышны были стоны только
Рихарда. Я представил, что он сейчас здесь, со мной, что это я сжимаю в руках
его ягодицы, оставляя на них иссиня-красные полумесяцы, что это я нежно вхожу
в него, что он подо мной вздрагивает от наслаждения...
В паху медленно созревал огненный шар. Он обжигал всё внизу живота, периодически
посылая наиболее жаркие импульсы. Я приспустил резинку пижамных штанов, всё ещё
не открывая глаза. Моё дыхание заходилось, стоны по ту сторону стены учащались,
горячая плоть была готова лопнуть от напряжения.
Два громких, резких выдоха. Его - мой. В моём, кажется, проскользнуло эхо его
имени, которое обожгло мне горло.
Всё вокруг стихло. Тишина снова прокралась в мои уши.
Мне было гадко. Но всё-таки я был счастлив от того, что нам с Рихардом было хорошо.
Я бы смог удовлетворить его не только физическое желание, я бы смог доставить
удовольствие и его душе. Я бы многое смог...
Откинув со лба влажные от пота волосы, я повернулся на бок и достаточно быстро
уснул.
Меня разбудили холодные лучи-иголочки недружелюбного зимнего солнца, проникавшие
в мою спальню. Едва я разлепил глаза, будильник на прикроватной тумбочке услужливо
показал мне время: без пяти десять. Я впервые за долгое время чувствовал себя
хорошо. Мне казалось, что несколько предыдущих недель я карабкался в высокую,
крутую гору и, наконец достигнув вершины, скатился с неё вниз, будто на санках,
наслаждаясь скоростью и свистом ветра в ушах. Казалось, что я только что передал
эстафетную палочку и был рад, что предыдущий, мой этап остался позади. Пускай
я его и проиграл.
Я не стал нежиться в кровати, упиваясь своим хорошим настроением, а спешно встал,
накинул халат и спустился вниз, в столовую. Там был один лишь Флаке, он сидел
за столом, читая книгу и прихлёбывая кофе маленькими глоточками.
- Доброе утро!
Клавишник оторвался от книги и дружелюбно улыбнулся.
- Доброе. Как спалось?
- Замечательно! - широко улыбаясь, сказал я. Впервые за последнее время мне ничего
не снилось, чему я был несказанно рад, ведь все мои сны крутились вокруг Рихарда.
Либо это были кошмары, в которых он был с Паулем, либо это были прекрасные сны,
в которых мы были вместе и любили друг друга, и от этого мне было невыносимо больно
просыпаться. Поэто'
- source_sentence: 'query: Фэш думал, что Василиса - глупая рыжеволосая шпионка, которая
выполняет любые приказы своего отца, идет по стопам Огнева для достижения высшей
точки власти. Думал, она пытается разрушить жизни Ника и старшего Лазарева. Парень
считал, что девочка лишь втирается в доверие, делает вид, что она такая добрая
и милая, вечно краснеющая и невинная красавица. А на самом деле внутри ее души
свернулась кольцами змея, ожидающая момента, когда придется вонзить свои зубы
в шею противника и впрыснуть яд.
Фэш думал, что Василиса никогда не сможет научиться летать. Люди, расхаживающие
по земле, не могут почувствовать крылья за спиной, "отключить руки" и взмыть в
голубое небо. Не способны почувствовать порывы ветра на своей коже и понять, каково
это - превратиться в птицу.
Драгоций уверял себя в том, что совершенно не завидует ее умению находить выход
из сложных ситуаций, улыбаться, продолжать шутить и веселиться, хорошо зная о
приближающемся нападении Астрагора. Драгоций считал правильным скрывать эмоции,
не открывать толпе свои чувства, мысли и страхи, ведь так живется легче. Поэтому
об этой легкой зависти никто и не знал. Даже Ник.
Фэш думал, что Василиса вечно будет выделяться из толпы, потому что уважающие себя
часовщики не делают всякие акробатические номера в свободное время. И, тем более,
не лазают по деревьям.
Парень считал, что Василиса - глупая девчонка, потому что дерзит и перечит старшим,
постоянно пререкается с компанией Марка, подбрасывая в пламя напряженности все
больше сухих поленьев. Парень точно знал, когда-то она заплатит за все свои действия
и колкие слова, и даже позволял себе растягивать губы в фальшивой улыбке, размышляя
об этих недалеких временах.
Драгоций ненавидел ее за то, что она одарила его сочувствующим взглядом и попыталась
пожалеть, понять, когда узнала об его сиротстве. Фэш считал, что девочка могла
не спрашивать у него о старых временах и не пытаться подбодрить, что ему жалость
совершенно не нужна.
Фэш думал, Василиса - слишком слабая для того, чтобы выжить после случая с Алым
Цветком, ведь никто ранее из носителей черного ключа не выживал. Наверное, поэтому
Драгоций решил помочь ей. На парня повлияла милая улыбка Огневой. Фэш тогда для
себя решил, что в последний раз делает ей подобное одолжение и дарит жизнь, пообещал
не вспоминать о злополучном дне возвращения к Астрагору из-за рыжеволосой.
Драгоций думал, что Василиса не победит великого духа, пусть даже выучит тысячу
эферов, что Огнева - немощная и бессильная девчонка, которая только и умеет, что
помогать другим и строить из себя героиню дешевого романа. Он пообещал, что не
присоединится к ней, не будет помогать. Фэш считал, что лишь делает вид и притворяется
ее другом.
Драгоций думал, что не влюбится в нее, не пойдет на поводу каких-то нелепых чар
рыжеволосой, но он лишь в очередной раз ошибался.'
sentences:
- 'query: Гарри крутит колесико зажигалки, Тикки заканчивает есть свое печенье и
смотрит на него укоризненно, как бы говоря прекрати-это-парень-не-поможет (но
он не может, черт, попросту не может, потому что курение - это привычка, единственный
быстрый способ перестать желать смерти Элеонор, потому что, хэй, он супергерой,
ему нельзя убивать невинных людей).
Гарри виновато улыбается и з-а-т-я-г-и-в-а-е-т-с-я сигаретным дымом до жжения
в горле, черных точек перед глазами и адской боли где-то между ребер, вспоминая
переплетенные пальцы любви всей его жизни и девушки, отравляющей его существование
с самого первого класса.
У нее папа - мэр города, вещи от мировых брендов, красивая (но пустая) внешность
и целый воз ужасных поступков, и Гарри понятия не имеет, что Луи в ней мог найти
(нашел).
- Ты сильный, - говорит его маленькая подруга, садясь на плечо. - То, что Луи
начал встречаться с Элеонор, не так страшно, как создания, с которыми ты постоянно
сражаешься.
(гораздо страшнее)
- Конечно, - хрипит Гарри вместе с косой, натянутой улыбкой на лице, и Тикки ему,
кажется, верит, кладя маленькую ладошку на щеку в знак одобрения.
Гарри шестнадцать. Он должен ходить на свидания, веселиться с друзьями и наслаждаться
жизнью, но вместо этого он спасает Париж чуть ли не ежедневно, летая по городу
на дурацком йо-йо, словно человек-паук, и полагаясь на силу маленького браслета
(красного, в крапинку, как крылья Божьей Коровки).
(а еще Гарри пытается скрыть от всех свою (не очень) маленькую влюбленность в
Луи Томлинсона, его одноклассника, рядом с которым не может связать даже двух
слов и удержаться на ногах.
теперь это неважно, потому что (его) Лу встречается с Элеонор, и парень чувствует,
как в груди что-то жжет, а желание жить с каждым днем уменьшается в геометрической
прогрессии, ой-ой).
Первым замечает Нуар, и не то чтобы Гарри удивлен этому факту, просто как-то странно,
что этот раздражающий, дерзкий и совершенно-не-похожий-на-взрослого кот почувствовал
его притворство.
- Все в порядке, Божья Коровка? - спрашивает он, когда они побеждают Рефлекту,
и остается несколько минут до превращения.
Зеленые глаза за маской выглядят (по-настоящему) обеспокоенными, и Гарри хотел
бы поверить в реальное волнение Кота о своей жизни, но это не в его силах.
Гарри фыркает и пытается держать себя в руках (вдох-выдох, мальчик), потому что
они ведь напарники, и он не обязан открывать свою душу, верно? (к тому же, говорить
с Нуаром о любви все равно что с ребенком, он рассмеется, не поймет)
- Разумеется, с чего ты взял, что что-то не так, котик? - Гарри легонько бьет
его по носу, Гарри смеется и делает вид, что он действительно в порядке, пусть
внутри и все ноет и скандирует пиздец-пиздец-пиздец-я-так-облажался (снова).
Нуар морщит нос и смотрит еще более пристально, перехватывая его ладонь и качая
головой, не верю, мол, придумай что-нибудь получше.
- Я просто чувствую, - ушки на его голове дергаются, и Гарри дергается тоже, пытаясь
уйти (убежать) и вернуться домой, чтобы покурить в одиночестве, но хватка Кота
крепкая, а голос отчаянный, почти умоляющий, когда он просит рассказать (поделиться).
- Это не твое, черт возьми, дело, - шипит Гарри, все-таки вырывая руку, и спрыгивает
с крыши. - До встречи на следующем задании, напарник.
У Луи - морские глаза, ослепительные улыбки и вечный румянец на щеках (и он такой
красивый, господи, что Гарри готов поклоняться ему, как своему личному Богу).
Иногда Луи замечает его взгляды и подмигивает, и Стайлс, правда, знает, что это
все несерьезно, но ничего не может с собой поделать, улыбаясь так широко, что
болят скулы, потому что, черт, он влюблен, влюблен, влюблен так сильно, так глупо,
так по-детски (кажется, уже целую вечность).
Гарри мечтает взять Луи за руку (чтобы тот позволил это сделать), превратиться
в чертову Божью Коровку и подняться на самый верх Эйфелевой, чтобы обниматься
с ним над всем Парижем.
(и Гарри знает, что в шестнадцать он должен желать заняться сексом с человеком,
который нравится, но этого нет, потому что уже четыре года Луи хочется только
л-ю-б-и-т-ь, и ничего больше).
В понедельник Луи целует Элеонор на глазах у всего класса, и Гарри чувствует,
как внутри него взрываются (города, вселенные, сверхновые), но молчит (когда любишь,
всегда молчишь).
Гарри приходит в свою пустую квартиру (родители уехали по работе на неделю куда-то
в Америку, и не сказать, что Стайлсу все равно, или что он не скучает, просто
ему не до них сейчас, правда) и падает на диван, желая только одного - умереть
(впервые так сильно за последнее время).
Он вытаскивает из комода бутылку вина, хотя ему еще лет пять как нельзя принимать
спиртное, и пьет весь вечер, пока перед глазами не начинают летать черные точки,
а голова кружиться. Гарри думает, что притворится больным и останется завтра дома
(нет никакого желания видеть любимого человека, который счастлив с другим).
Тикки говорит, что он не может просто так взять выходной, зло не дремлет, и все
такое, но Гарри все равно, он верит, что Нуар справится сам.
В итоге Квами вытаскивает его на задание, и Гарри ненавидит ее так так сильно,
что не хочет видеть еще ближайшие несколько дней.
Кот уже на месте и смеряет его (снова) взглядом, наполненным волнением, но Гарри
лишь отмахивается и отворачивается, чтобы быстренько выпить еще одну таблетку
ибупрофена - голова раскалывается так, будто там целый пчелиный улей.
Они едва не заваливают эту битву, потому что Нуар постоянно оглядывается на Стайлса
и пытается его прикрыть, а Гарри просто чувствует себя зомби (мозги не соображают,
тело едва слушается), но в конце концов все заканчивается как всегда, хорошо,
и Гарри обессиленно прислоняется к стене какого-то здания, прикрывая глаза (сейчас
бы оказаться на необитаемом острове, посреди дикой природы и бушующего океана).
- Ты можешь рассказать мне, что происходит, я пойму, - говорит Нуар, подходя к
нему, и Гарри хмыкает, потому что Кот все еще последний, кому бы он доверился.
- Все в порядке, - цедит Гарри с раздражением. - Просто небольшие проблемы в жизни
и похмелье.
- Я лишь хочу помочь, - в голосе парня проскальзывают обиженные нотки, и Гарри
(почти) его жаль.
- Мне никто не может помочь, понимаешь? Мне нужно, чтобы все оставили меня в покое,
и перестали пытаться что-то выяснить и исправить.
- Дай мне шанс, - просит Нуар и, черт, он действительно волнуется, теперь Гарри
не может это игнорировать. - Мы можем встретиться где-нибудь и поговорить. В костюмах,
конечно же.
И Гарри не знает, что руководит им, когда он говорит "да".
В среду к Стайлсу приходит Зейн (лучший друг из категории вместе-с-детства-и-навсегда).
Он приносит с собой "1+1" и мороженое, но когда видит состояние Гарри, то просто
утягивает его с собой на кровать, чтобы обнять.
- Как ты думаешь, от любви можно умереть? - спрашивает Гарри, положив голову Зейну
на грудь и прислушиваясь к мерному стуку сердца, чувствуя (господи-боже-мой-наконец)
умиротворение.
Зейн тихо смеется над глупостью Гарри, но все равно обнимает крепче, прижимая
к себе, потому что знает, что нужно его младшему другу (знает, как лечить его
больное сердце).
- Хаз, ты же знаешь, что Луи с Элеонор не смогут быть дольше недели, она достанет
его своим отвратительным характером и писклявым голосом, - шепчет он. - Твоя любовь
никуда от тебя не денется.
Гарри тихо всхлипывает и прижимает ноги к телу, сворачиваясь клубочком. Он выглядит
истощенным и слабым, и Зейн чувствует боль от этого, но ничего не может поделать
(разве что врезать Томлисону, хоть это и глупо).
Гарри чувствует себя маленькой букашкой, которую вот-вот растопчут, у него нет
сил, энергии (и веры в лучшее, кстати, тоже с недавних пор).
Гарри думает, что не достоин быть супергероем и спасать мир, потому что не может
спасти даже самого себя.
Гарри обещает себе стать чуть сильнее (ради всего мира) и поговорить-таки с Нуаром
(потому что вина гложет его изнутри, и с ней надо что-то делать), ведь именно
это и делают супергерои - забывают о себе, заботясь о других - верно?
(на следующий день Гарри замазывает круги под глазами и выкуривает последнюю сигарету,
выкидывая пачку к черту с хватит себе под нос.
Гарри учится терпению, начинает здороваться с Луи и Элеонор и даже поздравляет
их, и пытается меньше смотреть на парня (последнее не получается, но ни одну великую
цель нельзя достигнуть сразу, так что)).
В четверг Гарри покупает чипсы, конфеты и газировку (он понятия не имеет, что
именно любит Кот) и идет на встречу с Нуаром в один из самых малонаселенных районов
города, где никто точно не заметит их, даже не надеясь на то, что разговор с ним
хоть как-то поможет ему.
- Ты здесь, - вскрикивает парень, вскакивая со своего места, и его глаза загораются
искренней радостью, когда он видит конфеты и все остальное. - Господи, тебе не
нужно было все это приносить.
Гарри пожимает плечами, потому что, черт, это же ерунда, и усаживается, откидываясь
на спинку лавочки.
- Давай начнем сразу. Чем быстрее, тем лучше, верно? - усмехается он, сплетая
свои пальцы в замок от внезапного чувства неловкости и испуга.
- Ты расскажешь мне, что происходит? - недоверчиво интересуется Нуар.
- Ты ведь не отстанешь, а мне уже осточертела твоя забота, словно я маленький
ребенок. Со мной не случилось ничего такого, от чего надо оберегать. Просто человек,
которого я люблю уже четыре года, встречается с другим, и это, оказывается, гораздо
хуже, чем описывают в книгах. Мне кажется, что я сгораю заживо, когда вижу их,
идущих по коридору, и это убивает, понимаешь? Потому что я знаю, что у меня нет
никаких шансов, я всего лишь глупый троечник с последней парты, у которого ни
внешности, ни богатого отца, в отличие от его чертовой девушки с отцом-мэром,
- заканчивает Гарри с полным опустошением внутри, ведь это первый раз, когда он
рассказывает свою историю кому-то, кроме лучшего друга.
- Я думаю, я понимаю, - кивает Нуар, выглядя серьезным и неожиданно удивленным,
и приобнимает его за плечи, заставляя прижаться к себе.
Гарри тяжело дышать после такого откровения, и его сердце бьется чересчур быстро,
так что он забывает о том, что они с Кот'
- 'query: Аято сжимает кулаки, глядя на уже въевшееся в душу имя. "Юи Комори".
Он - её Господин. И она обязана подчиняться ему. И, следуя этому правилу, сейчас
должна воскреснуть.
Злоба пожирает его, заставляя тихо рычать. Обычно подвёрнутая штанина опущена.
- Знаешь, так ты выглядишь немного... Небрежно.
Удар - по надгробному камню проходит трещина. Фото расщепляется надвое. Он готов
поспорить, что она сейчас стоит позади него. Губы поджаты, девушка едва ли сдерживает
слёзы. Руки нервно теребят и без того помятую юбку. Он готов поспорить, что сейчас
она тихо скажет что-то про то, что хозяин могилы будет недоволен. И он готов поспорить,
что если он обернётся, она исчезнет.
Аято устало облокачивается на дерево. Дождь приятно охлаждает разгорячившееся
тело.
- Я не разрешал тебе умирать...
И он готов поспорить, что она сейчас улыбается.
Рейджи садится на скамейку. Луну скрыли тучи - он уверен, что скоро пойдёт дождь.
Вампир элегантным движением поправляет очки. И почему ему захотелось придти сюда
именно сейчас?...
Чутьё вампира не обмануло. На каменный портрет падает несколько капель, а через
несколько секунд дождь уже льёт стеной.
Рейджи так и не двинулся с места, несмотря даже на настойчивое мяуканье за спиной.
Видимо, не выдержав, на скамейку запрыгивает небольшая кошка. Сиреневые глаза
чуть светятся, а насквозь мокрая белая шёрстка больше похожа на половую тряпку.
- Знаешь...
В шуме дождя голос парня едва различим, но кошка лишь наклоняет голову набок.
- Это крайне не вежливо. Из-за тебя я тут промок до нитки.
На шее кошки едва различимо поблёскивает миниатюрный нательный крестик на серебряной
цепочке... Наверное, показалось.
- Райто, а ты всегда носишь эту шляпу?
Вампир усмехается. Он сам не знает, когда и зачем он начал носить эту шляпу. Но
девушка действительно заметила - при ней он всегда был в шляпе. Был...
Райто провёл пальцами по трещине, разделяющей камень на две части. Парень тут
же распознал запах брата. Прикусив губу, он вновь присел на скамейку. Он уже знал,
кому сегодня влетит по полной программе.
- Считай себя избранной, маленькая стервочка.
Чёрная шляпа с красной лентой ложится на замысловатые узоры камня. Вампир усмехается
и встаёт.
Обострённое обоняние тут же улавливает знакомый запах. Усмешка тут же перерастает
в широкую довольную улыбку. Зелёные глаза хитро прищуриваются.
- Мы ещё встретимся, маленькая стервочка.
Запах клубники с примесью металла.
Канато вновь и вновь вглядывается в так полюбившиеся ему черты. Несмотря на новизну,
изображение на надгробном камне уже слегка стёрлось. Тонкие пальцы судорожно сжимают
плюшевого медвежонка, перебирая короткую искусственную шерсть. Парень до сих пор
не понял, как это могло случиться. Он отчётливо помнил, как вошёл в её комнату,
намереваясь испугать девушку. Он отчётливо помнил ту безмятежность, что застыла
на лице Юи. Он отчётливо помнил тот неистовый холод, исходящий от коченеющего
тела девушки.
Слёзы катятся из глаз, оставляя за собой мокрые дорожки. Он редко плакал. Уж лучше
смеяться, не правда ли?
- Знаешь, ты права.
Уголки губ парня чуть приподнимаются. Слёзы начинают течь с новой силой. Улыбка
становится ещё шире, обнажая белоснежные клыки.
- Смех продлевает жизнь, ведь так?
И он смеётся. Несмотря на то, что уже задыхается от рыданий.
Шу чуть щурится и отворачивается от фонаря, по его мнению так неуместно расположенному
здесь по его же просьбе. И о чём он тогда думал?! Ах да, о Юи...
- Знаешь... Я вовсе не боюсь темноты, Шу. Просто я хочу видеть лицо того, кто
скрывается в этой тьме.
Вампир еле слышно вздыхает. Фонарь полностью освещает его фигуру, что немного
раздражает. Ему неловко просто сидеть на могиле когда-то настолько заинтересовавшей
его девушке. Шу жутко хочется спать, но вновь услышать её голос хочется ещё больше.
Парень смотрит прямо в глаза портрета из-под опущенных ресниц. Ему кажется, что
он слышит её тихий смущённый голос. Два пальца ложатся на глаза портрета Юи, хоть
и не закрывая, но хотя бы перекрывая им обзор на вампира.
Лёгкая, даже немного грустная усмешка отражается на лице Шу.
- Закрой свои глаза, пожалуйста.
Ему показалось, или Юи действительно ему улыбнулась?..
- И не оправдывайся передо мной.
Субару фыркает, смотря на чёрно-белую фотографию Юи. Парень, тихо рыча, кидает
нож на могильную плиту. Он тут же втыкается в неё, оставляя вокруг себя паутину
мелких трещин. Субару, сжав кулаки, приседает напротив портрета и долго всматривается
в знакомые черты лица.
- Ты... Мне кажется, ты не такой, как они.
Парень скалится и уже более вольно располагается напротив камня.
- А ведь ты мне обещала, помнишь? Ты обещала, что убьёшь меня. И что теперь?..
Первые капли дождя упали на щёку Юи. Какому-нибудь сопливому романтику покажется,
что это напоминает её слёзы. В ответ на это Субару вполне может рассмеяться этому
человеку в лицо. Уж он-то знает, что её слёзы не такие. Её слёзы всегда внутри.
Беловолосый улыбается. Он уверен - она его слышит.
Несколько парней стоят около свежей могилы, не решаясь проронить и слова. По разноцветным
зонтам барабанит дождь. Один из вампиров не выдерживает и делает шаг к могиле.
- Она... Умерла навсегда, да? - голос Канато дрожит.
Парень, не получив ответа на свой вопрос, сильнее прижимает к себе своего плюшевого
мишку. Он обводит взглядом братьев и делает ещё одну попытку разрушить эту мёртвую
тишину.
- Но ведь... Рейдзи, ты же когда-то спасал Юи!
Рейдзи качает головой и поправляет очки. На смену всем недавним чувствам пришла
уже знакомая, но тем не менее ненавистная апатия.
Канато благоразумно замолкает и сжимает мягкую лапу Тедди. В голову влетает рассеянная
мысль о том, что Юи всегда говорила с медведем, как со старым знакомым. Пусть
когда-то это и раздражало, но сейчас он был готов простить Юи и это.
Небо просветлело. Райто, отреагировав на это первым, сложил зонт и игриво посмотрел
на белую кошку, сидящую на могиле. Рейдзи, проследив за взглядом брата, понятливо
хмыкнул.
- Слушайте, а вы верите в переселение душ? - голос Райто звучит непривычно громко.
Аято усмехается и смотрит на надпись, которую до этого загораживала кошка. И ему
почему-то кажется, что они немного поспешили.
"Заткнись и спи."'
- 'query: -Мукуро, смотри, смотри - это облачко похоже на барашка.
Иллюзионист открыл глаза, солнце на миг ослепило его, но он все уже умудрился
рассмотреть то самое облачко, о котором говорил Тсуна.
-Мм. Скорее похоже на гору сладкой ваты, чем на барашка.
При этих словах он улыбнулся Саваде, отчего тот сразу же покраснел. Легкая усмешка
сорвалась с губ Хранителя Тумана, ему всегда до безумия нравилось смотреть как
смущается его любимый...босс, нет, любимый.. просто любимое Небушко.Несмотря на
то, что Тсунаёши повзрослел, порой он вел себя как ребенок.
-Знаешь, Мукуро,а мне кажется, что ты будучи иллюзионистом обладаешь довольно
странной фантазией.
Иллюзионист посмотрел на Десятого. Разве мог он рассказать ему, Тсунаёши, о тех
фантазиях, что посещали его, особенно по ночам.
Когда начались их отношения, Мукуро согласился на некоторые условия, которые поставил
перед ним Савада.Он, несмотря на свою горячую итальянскую кровь, страсть, присущую
любому представителю данной национальности, согласился на эти условия. Слишком
поздно он осознал, что его любимый такой стеснительный, что даже порой сорвать
с его губ поцелуй - такая проблема. Несмотря на то, что они жили вместе уже второй
год, Савада оставался...девственником.И снова на губах Рокудо отразилась странная
задумчивая улыбка. Когда он так сильно изменился, когда стал изменять своим принципам?
Раньше, он бы не задумываясь просто взял был бы Вонголу, не спрашивая того - хочет
он этого или нет. Однако многое поменялось, насилие в отношении этого парня он
отмел сразу же. Он ведь любит .Савада любит его, но почему и от чего? Продолжая
смотреть на Тсуну, он задавал себе эти вопросы уже, наверное, в тысячный раз.
Почему он так рад тому, что может вот так спокойно сидеть здесь, в парке на лужайке
и улыбаться ему? Савада Тсунаёши - человек, который изменил его и мир внутри.
Странный мальчик, без особого дарования, не смелый, но и не трусливый .А когда
дело касалось его семьи, друзей - отважнее его не найдешь никого .От чего же он,
Рокудо Мукуро, холодный, циничный, ненавидящий мафию, убийца, так счастлив, находясь
рядом с Десятым боссом Вонголы?
Легкое прикосновение к его руке вывело Рокудо из потока размышлений.
-Мукуро, что-то случилось?
В глазах, этих карамельных глазах, неожиданно отразилось переживание и страх.
-Все в порядке, милый, все хорошо. Просто задумался.
Потянувшись к Саваде,он, обхватив того за талию, усадил себе на колени, нежно
прижимая к своей груди. Пальцы ласково поглаживали волосы, губы трепетно целовали,
заставляя Тсунаёши смущаться еще больше. Податливое, разгоряченное тело парня
сносило крышу у Мукуро и, когда легкий стон вырвался из груди любимого, иллюзионист
скользнул ладонями под рубашку, поглаживая спину, заставляя выгибаться, стонать
и сильнее прижиматься. Губами провел влажную дорожку по шее к ушку.
-Му-ку-ро..
Тсуна дрожал всем телом от возбуждения каждый раз, когда Мукуро ласкал его, ему
хотелось, чтобы тот ни за что и никогда не останавливался.
-Не останавливайся, Мукуро.
От этих слов руки Мукуро замерли, он ведь давно мечтал, чтобы его любимый сказал
ему это.А сейчас он взглянул в глаза Тсунаёши, пальцем провел по щеке и, неожиданно
для себя, прижал его к себе со всей нежностью. Как же хотелось вот так сидеть
с ним, прижимаясь, слушая биение любимого сердца, ощущать ласковые прикосновения
теплых рук, чувствовать дыхание на своей коже, что обжигало и сводило с ума. В
уголках глаз сверкнули слезы. От чувства, что сейчас охватило Рокудо, хотелось
плакать. Какое же это счастье - быть кому-то нужным, важным. Быть любимым.
-Я никогда не остановлюсь, обещаю тебе, мое Небо!
Отстранившись слегка от Мукуро, Тсуна поцеловал того в губы, пальцами стирая слезы
с щек.
-А я никогда не оставлю тебя, мой Туманчик!
Солнце уже коснулось макушек деревьев, а они продолжали сидеть молча, ведь им
не нужны были слова. Они понимали друг друга и без них.
-Джудаймее...Где вы?
Вдали послышали крики Гокудеры. Хранитель Урагана носился по парку в поисках своего
босса.
Рокудо выдохнул, Тсуна улыбнулся, они понимали, что пришло время им возвращаться
в тот мир, который не терпит сентиментальностей и нежностей. Мир мафии жесток
и беспощаден, но эти двое, живя в нем, хранили чувство, которое связало их .И,
несмотря на войны, потери, они пронесут это чувство до того дня, когда их не станет
уже на земле. Но, кто знает, может придет время и они встретятся снова, ибо настоящая
любовь вечна и не угасает, несмотря на года и века.'
- source_sentence: 'query: - Канда! Я ещё согласился на пассив "снизу" ! Но это уже
даже не пассив, это уже без актив какой-то!
- А что ты от меня хочешь-то? Я уже не могу сдерживаться!
- Не вдавливай меня ТАК в стену-то!
- Тч. Мояши, хватит елозить и наслаждайся моментом!
- Но это не честно! По отношению, между прочим, к те...ммммм!!!
- Тч. Мояши, что это?
- Бантик!
- Я вижу, на кой чёрт ты мне его завязал?
- Так ты похож на подарок!
- Счас Мугеном огребёшь!
- А почему ты не спросишь "Кому?"
- Мне это не интересно! А кому?
- Мне!!! ... *Чмок*
- Хм... меня это не устраивает!
- Чего?!!
- Того!
- ММММ!!!
- Комуи! Что это значит? Что с Алленом?
- Что? А, Канда. Аллена Уолкера ранили на миссии!
- Это я уже понял! Что с ним, говори конкретней! - самурай встряхнул начальника.
- Не повышай на меня голос! - возмутился смотритель.
- Вот с вашей сестрой что-нибудь случится, я вам тоже скажу "Не кричите!" Что
с Мояши?
- Эх... на миссии Аллен ослеп. Но не переживай. Это временно! Зрение восстановится!
Месяца через 3!
- 3 МЕСЯЦА?!
- Да! Ты уж не обижай его пока.
- Без вас знаю!
- Ты куда?
- К Аллену, куда же ещё! - грозно рявкнул самурай.
- Ох уж эти голубки...
- К-кто здесь? - Аллен сидел на койке, завернувшись в одеяло.
- ... - шаги приближались.
- Н-не подходи! - а вы бы не испугались, прежде оставшись в одиночестве, среди
акум, в густом лесу, без зрения? То-то же!
- "Не оставлю!"
- Чистая Сила! - занёс руку, предупреждая врага.
- "Ни за что больше не оставлю одного!" Аллен! - подхватить тонкое тельце на руки,
прижать к себе, ладонью накрыв глаза Мояши, как в первом поцелуе. И коснуться
уголка ротика своими губами.
- К-Канда?!
- Не волнуйся! Я стану твоими глазами пока ты продолжаешь быть моим Сердцем Невинности.
А ты уставший идёшь с миссии.
А я уставший иду с тренировки.
Твои ноги истоптаны.
POV Канды.
Моя голова болит.
Твои руки ноют.
Моё сердце истомилось.
И вот мы идём друг на друга, поднимаем грустные, измученные глаза друг к другу.
Ты останавливаешься.
Дурак, что-то говоришь.
Что-то кричишь.
О чём-то молчишь.
Как-то смотришь.
О чём-то волнуешься.
Снова о чём-то кричишь.
С грустью смотришь.
О ком ты мучаешься?
Делаешь шаг, ещё один.
Хватаешь за воротник.
Привстаёшь на носочках.
Целуешь...
Дурак, ты же устал!
Дурак, я же устал!
Я останавливаюсь.
Дурак, что-то отвечаю.
Что-то кричу.
На чём-то замолкаю.
Тупо смотрю.
Что-то щемит.
Опять что-то ору.
Отрешённо смотрю.
За кого-то волнуюсь.
Стою.
Настораживаюсь.
Всё равно.
Неужели?!
Отвечаю на поцелуй.
Как же мы устали!
- Давно?
- Всегда, Мояши.
- Нет, честно, я ненавижу голубей! А особенно белых! - скандалил Лави идя по коридору
Чёрного Ордена.
- Чем же они тебе не нравятся? - спросила девушка-китаянка.
- Да блин, сидят везде где не попадя и сверху какают!
И только они зашли за поворот, как услышали:
- Нет, Канда, прекрати! Нас могут увидеть!
- Да кто нас тут может увидеть, Мояши?
- Ну, кто-нибудь! Ах... Юу!
- Мимо голубятни никто не проходит! Это надёжная часть замка!
- Ну, ну, ну ладно... но почему именно тут?
- А кто верещал что ему романтики не хватает? - Канда недвусмысленно зажимал Аллена
на подоконнике.
Лави и Линали шарахнулись обратно.
- Хотя знаешь, Ли. Может в этих голубках что-то и есть!
Дышать становится всё труднее, меня загоняют в угол. Я испуганно оборачиваюсь
и вижу... его.
- Ты, проклятый! - окрикивает строгий японец.
А я молчу, дыхание сбилось, взволнован. Что ему нужно?
- Не думал что ты опустишься до краж, Мояши.
Что? Краж? Каких краж, Юу?
- Ты о чём?
- Именно ты, о я не сомневаюсь, - он издевается? - ты украл у меня одну вещь.
- Канда, я ничего не брал! - откуда это чувство безысходности? Он остановился.
- Либо верни мне его, либо я заберу твоё! - Что? Я его не понимаю! Что он делает?
Хватает за подбородок, тащит на себя. И... боже, что это? Он... Я чувствую его
губы и сам не пониая - отвечаю! Канда, ты, ты, ты... настоящий вор! Ты крадёшь
моё сердце!
- К-канда... - руки повисли. Не соображаю.
- Верни моё сердце, Грёбанный Стручок!
- Я... ни за что! Я оставлю его себе! А ты... уже забрал моё...
Волновался как-то Аллен, ведь Канда был на миссии. И вот шлёт он ему письмо через
голема.
А: Ты там вообще живой, БаКанда?
К: Да, Аллен. Живой, я живой!
А: Канда! Что с тобой? Тебе плохо? Умираешь? Тебя Линали поцеловала? Держись друг!
К: Ты чё? Со мной всё хорошо, Мояши!
А: Фух... не пугай меня так больше.
- Эй, Канда, не грусти! Это так на тебя не похоже!
- Тебе легко говорить. У вас с Лави всё налаживается. А на меня Уолкер даже не
смотрит!
- Ты поговори с ним!
- Тч, и так каждый день видимся!
- Нет, Юу, поговори с ним, как со мной! Всё образумиться слышишь?
- Не верю я в это.
- А ты поверь! Чудо - бывает!
http://vkontakte.ru/photo63528512_276702591
-Не отдам. Слышите?! Никогда не отдам вам Канду!!!
- Это не тебе решать, Аллен Уолкер!
- Не. Подходите. К. Нам.
- Он принадлежит нам!
- Я... он... УБЬЮ!!!
http://vkontakte.ru/photo63528512_276702661'
sentences:
- 'query: Сегодня, прыгая на кровати, Кира сломала ее. Она отчаянно пыталась допрыгнуть
до потолка, но ничего не получалось, и опилки лишь тщетно сыпались на пол. Никто
не слышал ни скрежета пружин, ни грохота; не было видно и самой поломки.
Мать, ругая дочь, в ответ получила лишь усталое равнодушие, что, конечно же, вывело
ее из себя. Крича что-то нечленораздельное, она стучала ногой по сломанному предмету.
Женщина не понимала, что она делала только хуже, но гнев в ее крови взял верх.
- Да как ты смеешь, паршивая девчонка! Я только и делала, что ухаживала за твоей
кроватью! А ты решила устроить погром?! Знаешь, что?! Я это так не оставлю! -
на этих словах женщина, чуть ли не снимая дверь с петель, выбежала из комнаты.
Кира резко опустилась на колени. Прижав руки к кровати, она пыталась сдерживать
ее невыносимый скрежет.
Взяв молоток и гвозди из кладовой, девочка безнадежно колотила по обломкам, пытаясь
хоть как-то их соединить. Но все оказалось безрезультатно: обломки лишь с еще
большим стремлением раскалывались под гнетом гвоздей.
Она легла на пол. Легкий сквозняк щекотал ее спину.
- Я никогда не смогу допрыгнуть до потолка, - сказала Кира и выдохнула.
- А вдруг это не так?
Кира резво встала. На ее лице появилась маска недоумения, а в груди начал разжигаться
огонек страха. Откуда этот голос?
- Не бойся, глупышка, - голос был очень мягок.
- Откуда ты? Я тебя раньше не слышала...
- А разве это важно?
- А что, нет?
- Почему это должно быть важно? Разве нельзя просто поговорить с тобой?
- Ты думаешь, я буду говорить с незнакомым голосом?
- А почему нет?
- Так. Мне надоедает эта игра в вопросы. Говори, что или кто ты есть?
Внезапно наступило молчание, после чего последовало продолжительное гудение.
Голос начал напевать песенку, не песню, а именно песенку. Любимую песенку Киры,
которую она заводила каждый раз, когда ломалось что-нибудь в ее комнате.
- Я могу построить тебе новую кровать. Гораздо лучше этой. В ней будет много цветов
и сладостей...
Девочка оживилась. В ее речи послышались нотки радости.
- Правда? Ты сделаешь это?
- Да, но вот только...
- Что "только"?
- Только она будет не настоящей. Ты не сможешь на ней спать, но она будет в твоей
комнате. - голос откашлялся. - Ах, да. Кроме тебя ее никто не увидит.
Девочка задумчиво улыбнулась.
- Но когда же я смогу увидеть свою кровать?
Голос начал смеяться. Сильно, долго, но мягко. Этот смех был очень и очень необычен:
вроде бы и добрый, а вроде бы и с насмешкой.
Жалость.
Жалость управляла им.
- Почему ты смеешься?
- Да потому что ты глупая девочка, которая даже не может решить.
- Я вовсе не глупа!
- Да? Так ответь: тебе нужно то, что я предлагаю?
- Но это же вовсе не настоящая кровать! - Кира приложила руки к лицу. - На ней
я не смогу допрыгнуть до потолка!
Голос опять залился смехом.
- ПОЧЕМУ ТЫ СМЕЕШЬСЯ ВСЕ ВРЕМЯ?!
- Да потому что ты уже решила. Уже давным-давно решила.
- И что же я решила?
- Ты согласна, ведь так?
Кира замешкалась, но, все же, выдавила из себя неуверенное "да".
Голос пропал, оставив после себя огромную кровать, с большим матрасом и мягкими
подушками. На такой кровати, определенно, можно было бы допрыгнуть до потолка.'
- 'query: Конец года - это пора для радости, в предчувствии надвигающихся каникул,
свободы. Это было начало мая, когда на улице уже тепло, а по утрам зябко. Когда
цветы уже расцвели и начали благоухать. Сырая земля покрывалась травиночками,
и по ней туда-сюда сновали букашки-таракашки.
Птицы летали над деревьями, чирикая и стрекоча, а какая-то особенно усердно напевала:
~ midori tanabiku namimori no
dainaku shounaku nami ga ii
itsumo kawaranu
sukoyaka kenage
aa~
tomo ni utaou
namimorichuu ~
Да... это была та самая чокнутая птичка, хозяином которой был не мене чокнутый
Хибари Кёя. Хотя назвать его так прилюдно ни у кого бы язык не повернулся... ну,
почти ни у кого.
Времена школьной поры прошли, и теперь настали не менее насыщенные времена студенчества.
Так уж получилось, судьбы злая шутка, что бедного Саваду Тсунаёши перенаправили
в университет, где главой дисциплинарного комитета был страх и ужас его жизни
- Хибари Кёя! Ну, разумеется после репетитора... но не об этом сейчас. Любопытно,
что бедного Саваду Тсунаёши, ошибочно, запихнули сразу на 2 курс! М-да... не повезло
ребёнку...
Но тут фортуна повернулась к нему своим рылом, и в его классе он повстречал замечательного
человека - Аллена Уолкера.
С ним они мигом сдружились и стали, не разлей вода. Но это было осенью, а теперь
весна! А это значит...
Сцена 1. Дубль 1.
- Тсуна, не переживай ты так! Сдашь ты эти экзамены! Ведь и я, и твой репетитор
занимались с тобой весь учебный год! Ты даже начал понимать азы электрофизики!
- успокаивал вечно лояльный седой, поглаживая Тсуну по пушистой каштановой шевелюре.
- Ну, а если что, останешься на второй год! Вон, некоторые так уже 3 раза делали!
- кивнул он на Канду, что сидел у окна в конце класса.
Канда Юу, о-о-о! Это, вообще, отдельная история! Хулиган, отличник, красавец,
последняя скотина, человек чести, бездарь, гроза всех и вся... чувства смешанные.
Как всё это и ещё много "положительных" качеств находятся в одном человеке, Аллен
отказывался понимать!
- Но он хотя бы крутой, и отличник, а я как был никчемным, таким и останусь. Мне
не сдать эти экзамены, ни за что в жизни! - продолжал страдать Тсуна, схватившись
за голову. Таким он был, слишком неуверенным в себе, пессимистичным, а ещё последним
неудачником... список можно продолжить. Но в тоже время, ради друзей он был готов
на многое! Его отзывчивость, доброта не знала границ. Если кто-то обижал его друзей,
его глаза становились оранжевыми, а сам он серьёзным и мега-сильным.
- БаКанда-то?! Ха-ха-ха! - рассмеялся Уолкер. - Дурак дураком! Он просто везунчик
с репутацией и внешностью! И всё! - он многозначительно хмыкнул. - А ты, ты добрый
и милый! Просто будь поувереннее в себе, и всё получится!
- Эх, и как ему удается быть таким уверенным? У меня так не получается... - вздохнул
Савада, посмотрев на Канду. - Да, и при этом он ничего не делает, лишь сидит на
своём месте, но все девчонки возле него вьются.
Но тут, вдруг, Канда посмотрел в их сторону, а Тсуна тут же отвернулся и сжался,
будто его только что облили ледяной водой.
- Фух...
Аллен тоже посмотрел на Канду и, показав ему язык, отвернулся.
- Пф! И что они в нём нашли, не понима... - вот теперь уже Аллен замер уставившись
на дверной проём, откуда излучалась аура смерти. Это был Хибари Кёя
"Что ему нужно?!"
Сцена 2. Дубль 1.
- Кто... кто посмел прийти в университет без сменки?!!!
Тут Уолкер посмотрел на пол и вздрогнул. Грязь! Лужи грязи от военных сапог, а
такие сапоги только у...
- Канда Юу! - взревел Кёя.
Но парень лишь одарил его своим обычным, равнодушным взглядом, полным холода.
- Что-то не так?
- Ты, травоядное! - подойдя к Канде, Хибари ласково отодвинул парту. - Ты ответишь
за то, что испачкал полы! - он насквозь прожигал взглядом.
- Хм, ещё чего, - гордые синие глаза пронизывали холодом в ответ. Вдобавок он
закинул ногу на ногу. - Сменка порвалась, другой я не нашёл, пришлось идти в университет
так.
- Да плевать я хотел! Босиком ходи! А помещение пачкать не смей! - рычал Кёя.
- Завтра так и сделаю - фыркнул тот. - Это всё?
- Будешь неделю мыть полы в этом коридоре! - нахмурился глава дисциплинарного
комитета. - И начнёшь, прямо сейчас!
- Тч, не намерен. Для этого есть уборщицы, и... - бросил короткий взгляд в сторону
Уолкера и Тсуны. - Дежурные.
- Чего-о-о?! - возмутился Уолкер. - За коридор мы не отвечаем!
- Хм, - хмыкнул Кёя, и мальчик решил помолчать. - Ты запачкал ты и убирай, а иначе...
- глаза сверкнули не по-доброму. - Камикорос!
- Нет желания драться, но раз ты настаиваешь! - Канда поднялся с места, смотря
на парня с вызовом. Он не собирался отдавать своему главному сопернику звание
грозы университета.
- О, это будет интересно, - злорадно ухмыльнулся. - Все вон! Пока не перебил.
Весь класс, что жался по стеночкам, моментально высыпал в коридор. Кроме Тсуны
и Аллена, что заворожено наблюдали за событиями. Савада со страху вцепился в руку
Уолкера, а сам парень обеспокоенно смотрел в сторону длинноволосого японца. -
Юу... - тихо позвал он.
- Правильно, свидетели ни к чему, - так же ухмыльнулся Канда, разминая руки. -
Вы, двое, разве не ясно было сказано? - глянул он в сторону парней.
- Аллен, может... - тихо проскулил Тсуна, прекрасно знавший нрав Хибари.
Белобрысый, что всё это время переводил взгляд с Кёя на Канду, вздохнул, опустив
глаза, и поддался на уговоры Савады, позволив утащить себя в коридор.
Сцена 3. Дубль 1.
- Хе... - Хибари странно хмыкнул, краем глаза наблюдая за ушедшими.
Канда так же молча, проводил подростков взглядом и вновь посмотрел на своего противника.
Эта ухмылка ни о чём добром не говорила.
Хибари неожиданно ударил парня в живот так, что тот отлетел к окну.
- Тч... - Канда согнулся, но быстро пришёл в себя и поднялся. Последовал ответный
удар.
- Хм... слабак! - Кёя быстро блокировал этот удар и подсечкой сбил противника
с ног.
Юу не растерялся и ударил его по ногам, тоже завалив на пол и сел на него, как
на скамейку. Потом поднялся и заломил тому руки, пригибая к полу.
- Бесишь!
Хибари вывернулся и с разворота ударил по лицу.
- Травоядные должны молчать и подчиняться!
- Я такой сволочи подчиняться не собираюсь! - удар в бок по печени.
Кёя ударил по голове тонфа.
- А тебя никто не спрашивает! Дисциплина на первом месте!
Канда заехал ногой в живот. Сапогами это очень жестоко.
- Пока меня никто не трогает, я спокоен!
- Пока не нарушаешь правила, спокоен я! - парень с силой ударил по солнечному
сплетению.
- Кх... ублюдок, - сморщился Юу.
- Тоже мне - наглый! Думаешь, хуй отрастил, и тебе всё дозволено?! - прорычал
Хибари.
- Говори, что хочешь, но полы мыть я не собираюсь, - тем же тоном ответил противник.
- Но таки вымоешь! - снова ударил глава дисциплинарного комитета.
- Завтра вообще не явлюсь. И плакал ваш кубок за первое место по баскетболу, -
вытерпел Канда.
- Ты мне тут не угрожай! Незаменимых людей не бывает! А тем более тебя заменить
- раз плюнуть!
- Ох, тогда это же отлично! Завтра целый день проваляюсь в кровати и не увижу
этого мелкого. Задолбал пялиться.
- М? А причём тут Аллен?! - Кёя вскинул бровь.
- Притом, что достал, - вздохнул Юу, - странный он какой-то. И смотрит на меня
как-то странно.
- Радовался бы! Все остальные от тебя шарахаются. С такими темпами и до онанизма
недалеко, или ты уже? - усмехнулся Хибари.
- Пф, нет... и что вообще за вопросы? Уолкер меня в последнюю очередь интересует.
- Я не об этой козявке говорю! А про то, что с твоим характером ни одна девушка
к тебе не подойдёт!
- Хе, - усмехнулся Канда. - Спорим, я любую за день смогу закадрить? И заняться
сексом.
- Ты-то? Ха! И за месяц не справишься! - оскалился Кёя.
- Так значит, спорим? - приподнялся Юу. - Но тогда и ты участвуешь.
- Хе, даю тебе неделю! - Хибари убрал тонфа и протянул свою руку.
- Договорились, - пожал руку тот. - И кто станет целью?
- Хм... а тот, кто первый войдёт в этот кабинет! Чтоб уж честно было. В подтверждение
победы принесу тебе нижнее бельё жертвы! - глава дисциплинарного комитета крепче
сжал руку и, рванув на себя, перекинул Канду через спину на пол. - Но учти, если
ты проиграешь, будешь драить университет весь год!
- Тч... ладно - Юу поднялся, держась за спину. - Я тебе это не прощу.
Тут в дверь тихонько постучались.
- А если выиграешь ты, я на год от тебя отстану! - хмыкнул Кёя и повернулся к
двери.
- Хибари-сан! Я, конечно, понимаю, что дисциплина - это святое, и поддерживаю
ваше решение надрать этому придурку зад! Но у нас тут урок, а мне реферат сдавать!
- зашёл безупречный Аллен Уолкер, в которого намертво вцепился Савада пытаясь
остановить.
Сцена 4. Дубль 1.
Канда посмотрел на мальчишку и издал тихий звук, похожий на кошачье шипение. Видимо
он не ожидал, что первыми в класс зайдут именно эти двое.
- Проходи, зашёл уже, - хмыкнул Юу и, отвесив Хибари подзатыльник, поспешил вернуться
на своё место.
- О, ты ещё живой?! Печально... - покачал головой Аллен. - Ребята заходите, Хибари
ушёл! - тут же в дверь повалили остальные. И последним зашёл преподаватель. Беловолосый
достал из сумки рисунки и чертежи, после чего развесил, взял указку и начал рассказывать
реферат по экологии.
Вообще, он не был отличником, но большим трудягой!
Если раньше Юу мечтал отсидеть последние уроки и свалить домой, то теперь его
желанием было, чтобы уроки никогда не заканчивались.
"Тч, Шпендель. Почему, почему ты так не вовремя свалился мне на голову?!" - думал
он, делая вид, что слушает.
- ... И вот поэтому для спасения китов так важно прекратить стрельбу и перевоз
нефти через океан! У меня всё! - закончил рассказ.
- Ну что ж, думаю, на 4-ку вполне хватит.
- Что?! Но учитель, у него потрясающий доклад! - защебетал одногруппник.
- Он много готовился, волновался, почему четыре?! - заступился за Аллена Тсуна.
- Да прекрасный доклад, если честно, не ожидал, - высказался человек, которого
меньше всего это могло волновать. Канда смотрел на преподавателя.
- Э-Э-Э?! - ошалел весь класс.
- А... э-эт-то... - залился румянцем Уолкер.
- Ну ладно, 5!
Юу, победно ухмыльнувшись, перевел взгляд на Аллена. Тот потупился и уставился
в пол.
"Хм, возможно это будет не так уж и ужасно", - почему-то только сейчас Кан'
- 'query: - Доброе утро, - шепот щекочет мне ухо. Совсем не хочется разлеплять глаза
и встречать новый день. Поворачиваюсь, притягивая тебя ближе, и утыкаюсь носом
тебе в грудь, ощущая запах сладостей, которые нравятся нам обоим. Я ежусь от холода,
пытаясь вслепую найти края уютного одеяла и снова окунуться в сон. Ты замечаешь
это и заботливо укрываешь меня. Твои пальцы перебирают мои волосы, а губы легко
касаются моего лба. Мы так и застываем в этой позе на некоторое время.
Проходит всего несколько минут, а потом я резко сажусь на кровати и начинаю ворчать,
что уже давно пора вставать, ведь сегодня предстоит поездка на природу вместе
с друзьями. У тебя на лице появляется улыбка, а руки тянут обратно, заставляя
вновь откинуться на подушки. На улице льет дождь, барабаня в окна, а что еще делать
в такой день, если не нежиться в уютной постели в объятиях любимого?
Сколько времени мы были знакомы, прежде, чем узнали о чувствах друг друга? Да,
я не помню этого, но кто считает? Главное, что в моей памяти до сих пор бережно
хранится момент, когда ты наконец услышал те важные слова. Перед глазами всплывают
счастливые мгновения, словно кадры, запечатлевшие всё в мельчайших деталях. Это
произошло в морозный январский день. Весёлая компания молодых людей не могла просто
сидеть дома взаперти и упустить такой хороший случай для прогулки по заснеженному
лесу и прочих зимних забав.
Ты тогда оказался вне нашего поля зрения, а темнота уже начала опускаться на землю.
Конечно, мне ничего не оставалось, кроме как отправиться на поиски. На моем лице
застыло удивление, когда я застал тебя за странным занятием: было забавно наблюдать
за тобой, выводящим акварелью и баллончиками с краской некие узоры прямо на снегу.
Твои необычность и непредсказуемость притягивали к себе мою натуру.
- Ты мне нравишься. Очень, - кажется, будто всё замерло, и в звенящей тишине прозвучали
простые слова, которые тяжело произнести. Что могло толкнуть меня просто взять
и сказать их? Однако ответ на этот вопрос уже не важен, теперь он оставил место
для беспокойства. Твои эмоции сложно прочитать. Так было всегда. Молчание нагнетает
напряжение между нами.
Прикосновение ледяных пальцев к моей щеке выводит из оцепенения, сковавшего тело.
Я еле-еле различаю, что ты сейчас говоришь, но некоторые обрывки фраз всё же приобретают
смысл. Никогда не верил в чудеса, да вот только сейчас понимаю: они случаются.
Маленькое чудо - узнать об ответных чувствах того, кто так много значит для тебя.
Мы идем с тобой по заметенным снегом улицам. Вьюга, завывая, дует в лицо, сбивая
прохожих с пути, а у меня на душе - спокойствие и умиротворение... Когда ты рядом,
происходящее вокруг не имеет значения, и нет дела до всех остальных.
Мне слышно, как твои зубы стучат от холода. Сжавшись, ты прячешь нос в высокий
ворот куртки. Я уверен, что твои руки в карманах давно не могут отогреться и принять
нормальную температуру.
- Замерз? - спрашиваю, заглядывая в карие глаза, обрамленные черными ресницами,
на которые тихо падают снежинки, и, не дожидаясь ответа, тяну тебя в ближайшее
кафе.
- Пойдем домой, а то воспаление легких подхватишь, - строго замечаешь ты, уже
направляясь в сторону нашего подъезда.
- Постой, разве не видишь, какая чудесная погода? - знаешь ведь, что мне нравится
гулять под дождем, подставляя лицо падающим холодным каплям.
Тебе в голову быстро приходит мысль, как заставить меня уйти в более сухое и теплое
место. Долго не раздумывая, рывком притягиваешь к себе, прижимаясь к моим губам.
От неожиданности я приоткрываю их, а руками начинаю гладить твою спину, к которой
прилипла изрядно промокшая рубашка. Не спеша, ты углубляешь поцелуй, еще больше
раззадоривая. Именно так и предполагалось, правда?
Кое-как справившись с замком, мы вваливаемся в полутемную квартиру, едва сумев
устоять на ногах. Перед глазами до сих пор стоит пелена дождя. Ты сразу же резко
прижимаешь меня к стене, и твой язык врывается в мой рот в неистовом поцелуе,
беспорядочно двигается вдоль зубов и возвращается к моему языку. Я не стремлюсь
брать инициативу на себя, мне всегда нравилось плавиться под натиском твоих ласк
и ожидать, что же ты предпримешь дальше. У тебя почти всегда ледяные пальцы, и
у меня мурашки бегут по коже от приятных, но холодных прикосновений. Тебе нравится
смотреть, как прогибается моя спина, когда ты рисуешь на ней невидимые линии.
В джинсах уже становится тесно, а в голове образуется пустота, заполняемая лишь
тобой. Твои руки опускаются ниже, нащупывая пряжку ремня. О, ты же сам затеял
эту игру, малыш, так давай поиграем?
Я, всё так же находясь в крепких объятиях, делаю неожиданный разворот, привычно
занимая роль актива. Ты с замиранием сердца смотришь на меня, прекратив все действия.
Улыбнувшись, провожу языком по твоему уху, чуть прикусывая мочку, от чего твои
дрожащие пальцы перемещаются вверх и судорожно сжимают мои волосы, с которых стекает
вода. Нетерпеливо расстегиваю пуговицы твоей рубашки, попутно оставляя несколько
багровых отметин на шее и на груди. До меня доносится стон, и я продолжаю медленную
пытку, стягивая с тебя брюки вместе с бельем. В тишине, разбавляемой нашим тяжелым
дыханием, раздается шумный выдох, когда я делаю несколько движений рукой по основанию
члена, а затем, лизнув головку, отстраняюсь, глядя в одурманенные глаза.
- Спальня, - шепчешь ты, крепко держась за край тумбочки, стоявшей рядом.
Просить дважды нет смысла, ведь у меня самого уже нет сил терпеть это тянущее
ощущение, образовавшееся внизу живота.
Легко подхватываю тебя на руки и иду в ту комнату, в которой мы столько раз занимались
любовью.
Кровать встречает нас знакомым скрипом, когда я опускаю тебя, нервно кусающего
губы. Ты хватаешь меня и тянешь на себя, отчего оказываешься прижатым моим телом.
Твои руки скользят по моим бокам, помогая снять футболку и, приложив некоторые
усилия, приподнявшись, обводишь языком мои соски, слегка царапая их зубами.
Чувствуя необходимость скорейшей разрядки, я пытаюсь как можно скорее снять джинсы
и нашарить в ящичке шкафа смазку и презервативы. Нетерпеливо устраиваюсь поудобнее
между твоих ног, разводя их в стороны и немного сгибая в коленях. Выдавливаю гель
и поочередно аккуратно ввожу в тебя пальцы, растягивая проход. Слышу твое сдавленное
шипение и стараюсь отвлечь своими ласками, покрывая грудь и плечи поцелуями, кое-где
чуть прикусывая кожу.
Ты заерзал и недовольно уставился на меня, требуя большего. Я с удовольствием
подчиняюсь. Приставляю член ко входу и медленно вхожу, на что получаю еле заметный
кивок, как разрешение продолжать. Спустя несколько толчков ты выгибаешься в позвоночнике,
и на моем лице появляется улыбка. Я увеличиваю темп, двигаясь всё быстрее.
Оргазм стремительно накрывает нас с головой, даря столь долгожданное наслаждение.
Со сбившимся дыханием, со звездочками в глазах падаю рядом с тобой, раскрасневшимся,
тяжело дышащим, но таким любимым. Ты прижимаешься ко мне, положив голову на мою
грудь. Делать сейчас что-либо выше всяких сил - я продолжаю лежать, поглаживая
твои волосы и вслушиваясь в биение наших сердец.
Почему я тебя тогда не послушал? Зачем позволил тебе мокнуть под дождем вместе
со мной? Если бы не эта ошибка, ты бы не подхватил серьезную болезнь. Меня до
сих пор терзает чувство вины. Очень тяжело осознавать, что погубил чью-то жизнь...
Особенно того, кто был центром моей Вселенной.
Я продолжаю жить прошлым, не могу не вспоминать те немногие, но такие дорогие
моему сердцу моменты, проведенные с тобой. Мы совсем недолго были вместе. Меня
часто можно встретить на том самом месте в лесу, где я открылся тебе. Иногда мне
кажется, что сквозь сильную метель вижу твой силуэт. Ты улыбаешься и делаешь несколько
шагов навстречу, а потом исчезаешь...
Если бы только была возможность еще раз услышать такое теплое "доброе утро", почувствовать
горячее дыхание, щекочущее ухо, хоть что-нибудь...
Пока ты был рядом, было совсем не важно, что происходит вокруг. Но теперь, когда
я наблюдаю за ненастной погодой в окно, у меня нет светлых мыслей и легкости,
что возникали раньше. Даже летом мое сердце сковывает лед, который уже не удастся
растопить.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-base
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy
value: 0.9225280326197758
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.7901061773300171
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.7559554803436604
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7817596793174744
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.756201575623413
name: Cosine Precision
- type: cosine_recall
value: 0.7557095451883662
name: Cosine Recall
- type: cosine_ap
value: 0.8478615501518483
name: Cosine Ap
- type: cosine_mcc
value: 0.7071656901034916
name: Cosine Mcc
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
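Every training pair shown in the metadata above carries the `query: ` prefix used by the E5 family of models, so inputs at inference time should very likely be prefixed the same way. A minimal sketch (the model id is a placeholder, as in the Usage section below):

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder id, as in the Usage section below.
model = SentenceTransformer("sentence_transformers_model_id")

# Keep the "query: " prefix that the training pairs above use.
embeddings = model.encode([
    "query: first text to compare",
    "query: second text to compare",
])
print(util.cos_sim(embeddings, embeddings))  # 2x2 cosine-similarity matrix
```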
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision 835193815a3936a24a0ee7dc9e3d48c1fbb19c55 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
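In other words, an embedding is the attention-mask-weighted mean of the XLM-R token states, L2-normalized by the final module so that a dot product equals cosine similarity. A rough equivalent using plain `transformers`, shown on the base checkpoint purely for illustration (the finetuned weights would be loaded the same way once uploaded):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Base checkpoint used here only to illustrate the pooling logic.
tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-base")
model = AutoModel.from_pretrained("intfloat/multilingual-e5-base")

texts = ["query: first example", "query: second example"]
batch = tokenizer(texts, max_length=512, padding=True,
                  truncation=True, return_tensors="pt")

with torch.no_grad():
    token_states = model(**batch).last_hidden_state  # (batch, seq, 768)

# Mean pooling over non-padding tokens (module (1) above)...
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_states * mask).sum(dim=1) / mask.sum(dim=1)

# ...followed by L2 normalization (module (2)).
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings @ embeddings.T)  # cosine-similarity matrix
```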
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'query: - Канда! Я ещё согласился на пассив "снизу" ! Но это уже даже не пассив, это уже без актив какой-то!\n- А что ты от меня хочешь-то? Я уже не могу сдерживаться!\n- Не вдавливай меня ТАК в стену-то!\n- Тч. Мояши, хватит елозить и наслаждайся моментом!\n- Но это не честно! По отношению, между прочим, к те...ммммм!!!\n- Тч. Мояши, что это?\n- Бантик!\n- Я вижу, на кой чёрт ты мне его завязал?\n- Так ты похож на подарок!\n- Счас Мугеном огребёшь!\n- А почему ты не спросишь "Кому?"\n- Мне это не интересно! А кому?\n- Мне!!! ... *Чмок*\n- Хм... меня это не устраивает!\n- Чего?!!\n- Того!\n- ММММ!!!\n- Комуи! Что это значит? Что с Алленом?\n- Что? А, Канда. Аллена Уолкера ранили на миссии!\n- Это я уже понял! Что с ним, говори конкретней! - самурай встряхнул начальника.\n- Не повышай на меня голос! - возмутился смотритель.\n- Вот с вашей сестрой что-нибудь случится, я вам тоже скажу "Не кричите!" Что с Мояши?\n- Эх... на миссии Аллен ослеп. Но не переживай. Это временно! Зрение восстановится! Месяца через 3!\n- 3 МЕСЯЦА?!\n- Да! Ты уж не обижай его пока.\n- Без вас знаю!\n- Ты куда?\n- К Аллену, куда же ещё! - грозно рявкнул самурай.\n- Ох уж эти голубки...\n- К-кто здесь? - Аллен сидел на койке, завернувшись в одеяло.\n- ... - шаги приближались.\n- Н-не подходи! - а вы бы не испугались, прежде оставшись в одиночестве, среди акум, в густом лесу, без зрения? То-то же!\n- "Не оставлю!"\n- Чистая Сила! - занёс руку, предупреждая врага.\n- "Ни за что больше не оставлю одного!" Аллен! - подхватить тонкое тельце на руки, прижать к себе, ладонью накрыв глаза Мояши, как в первом поцелуе. И коснуться уголка ротика своими губами.\n- К-Канда?!\n- Не волнуйся! Я стану твоими глазами пока ты продолжаешь быть моим Сердцем Невинности.\nА ты уставший идёшь с миссии.\nА я уставший иду с тренировки.\nТвои ноги истоптаны.\nPOV Канды.\nМоя голова болит.\nТвои руки ноют.\nМоё сердце истомилось.\nИ вот мы идём друг на друга, поднимаем грустные, измученные глаза друг к другу.\nТы останавливаешься.\nДурак, что-то говоришь.\nЧто-то кричишь.\nО чём-то молчишь.\nКак-то смотришь.\nО чём-то волнуешься.\nСнова о чём-то кричишь.\nС грустью смотришь.\nО ком ты мучаешься?\nДелаешь шаг, ещё один.\nХватаешь за воротник.\nПривстаёшь на носочках.\nЦелуешь...\nДурак, ты же устал!\nДурак, я же устал!\nЯ останавливаюсь.\nДурак, что-то отвечаю.\nЧто-то кричу.\nНа чём-то замолкаю.\nТупо смотрю.\nЧто-то щемит.\nОпять что-то ору.\nОтрешённо смотрю.\nЗа кого-то волнуюсь.\nСтою.\nНастораживаюсь.\nВсё равно.\nНеужели?!\nОтвечаю на поцелуй.\nКак же мы устали!\n- Давно?\n- Всегда, Мояши.\n- Нет, честно, я ненавижу голубей! А особенно белых! - скандалил Лави идя по коридору Чёрного Ордена.\n- Чем же они тебе не нравятся? - спросила девушка-китаянка.\n- Да блин, сидят везде где не попадя и сверху какают!\nИ только они зашли за поворот, как услышали:\n- Нет, Канда, прекрати! Нас могут увидеть!\n- Да кто нас тут может увидеть, Мояши?\n- Ну, кто-нибудь! Ах... Юу!\n- Мимо голубятни никто не проходит! Это надёжная часть замка!\n- Ну, ну, ну ладно... но почему именно тут?\n- А кто верещал что ему романтики не хватает? - Канда недвусмысленно зажимал Аллена на подоконнике.\nЛави и Линали шарахнулись обратно.\n- Хотя знаешь, Ли. Может в этих голубках что-то и есть!\nДышать становится всё труднее, меня загоняют в угол. Я испуганно оборачиваюсь и вижу... его.\n- Ты, проклятый! - окрикивает строгий японец.\nА я молчу, дыхание сбилось, взволнован. Что ему нужно?\n- Не думал что ты опустишься до краж, Мояши.\nЧто? Краж? 
Каких краж, Юу?\n- Ты о чём?\n- Именно ты, о я не сомневаюсь, - он издевается? - ты украл у меня одну вещь.\n- Канда, я ничего не брал! - откуда это чувство безысходности? Он остановился.\n- Либо верни мне его, либо я заберу твоё! - Что? Я его не понимаю! Что он делает? Хватает за подбородок, тащит на себя. И... боже, что это? Он... Я чувствую его губы и сам не пониая - отвечаю! Канда, ты, ты, ты... настоящий вор! Ты крадёшь моё сердце!\n- К-канда... - руки повисли. Не соображаю.\n- Верни моё сердце, Грёбанный Стручок!\n- Я... ни за что! Я оставлю его себе! А ты... уже забрал моё...\nВолновался как-то Аллен, ведь Канда был на миссии. И вот шлёт он ему письмо через голема.\nА: Ты там вообще живой, БаКанда?\nК: Да, Аллен. Живой, я живой!\nА: Канда! Что с тобой? Тебе плохо? Умираешь? Тебя Линали поцеловала? Держись друг!\nК: Ты чё? Со мной всё хорошо, Мояши!\nА: Фух... не пугай меня так больше.\n- Эй, Канда, не грусти! Это так на тебя не похоже!\n- Тебе легко говорить. У вас с Лави всё налаживается. А на меня Уолкер даже не смотрит!\n- Ты поговори с ним!\n- Тч, и так каждый день видимся!\n- Нет, Юу, поговори с ним, как со мной! Всё образумиться слышишь?\n- Не верю я в это.\n- А ты поверь! Чудо - бывает!\nhttp://vkontakte.ru/photo63528512_276702591\n-Не отдам. Слышите?! Никогда не отдам вам Канду!!!\n- Это не тебе решать, Аллен Уолкер!\n- Не. Подходите. К. Нам.\n- Он принадлежит нам!\n- Я... он... УБЬЮ!!!\nhttp://vkontakte.ru/photo63528512_276702661',
'query: Конец года - это пора для радости, в предчувствии надвигающихся каникул, свободы. Это было начало мая, когда на улице уже тепло, а по утрам зябко. Когда цветы уже расцвели и начали благоухать. Сырая земля покрывалась травиночками, и по ней туда-сюда сновали букашки-таракашки.\nПтицы летали над деревьями, чирикая и стрекоча, а какая-то особенно усердно напевала:\n~ midori tanabiku namimori no\ndainaku shounaku nami ga ii\nitsumo kawaranu\nsukoyaka kenage\naa~\ntomo ni utaou\nnamimorichuu ~\nДа... это была та самая чокнутая птичка, хозяином которой был не мене чокнутый Хибари Кёя. Хотя назвать его так прилюдно ни у кого бы язык не повернулся... ну, почти ни у кого.\nВремена школьной поры прошли, и теперь настали не менее насыщенные времена студенчества. Так уж получилось, судьбы злая шутка, что бедного Саваду Тсунаёши перенаправили в университет, где главой дисциплинарного комитета был страх и ужас его жизни - Хибари Кёя! Ну, разумеется после репетитора... но не об этом сейчас. Любопытно, что бедного Саваду Тсунаёши, ошибочно, запихнули сразу на 2 курс! М-да... не повезло ребёнку...\nНо тут фортуна повернулась к нему своим рылом, и в его классе он повстречал замечательного человека - Аллена Уолкера.\nС ним они мигом сдружились и стали, не разлей вода. Но это было осенью, а теперь весна! А это значит...\nСцена 1. Дубль 1.\n- Тсуна, не переживай ты так! Сдашь ты эти экзамены! Ведь и я, и твой репетитор занимались с тобой весь учебный год! Ты даже начал понимать азы электрофизики! - успокаивал вечно лояльный седой, поглаживая Тсуну по пушистой каштановой шевелюре. - Ну, а если что, останешься на второй год! Вон, некоторые так уже 3 раза делали! - кивнул он на Канду, что сидел у окна в конце класса.\nКанда Юу, о-о-о! Это, вообще, отдельная история! Хулиган, отличник, красавец, последняя скотина, человек чести, бездарь, гроза всех и вся... чувства смешанные. Как всё это и ещё много "положительных" качеств находятся в одном человеке, Аллен отказывался понимать!\n- Но он хотя бы крутой, и отличник, а я как был никчемным, таким и останусь. Мне не сдать эти экзамены, ни за что в жизни! - продолжал страдать Тсуна, схватившись за голову. Таким он был, слишком неуверенным в себе, пессимистичным, а ещё последним неудачником... список можно продолжить. Но в тоже время, ради друзей он был готов на многое! Его отзывчивость, доброта не знала границ. Если кто-то обижал его друзей, его глаза становились оранжевыми, а сам он серьёзным и мега-сильным.\n- БаКанда-то?! Ха-ха-ха! - рассмеялся Уолкер. - Дурак дураком! Он просто везунчик с репутацией и внешностью! И всё! - он многозначительно хмыкнул. - А ты, ты добрый и милый! Просто будь поувереннее в себе, и всё получится!\n- Эх, и как ему удается быть таким уверенным? У меня так не получается... - вздохнул Савада, посмотрев на Канду. - Да, и при этом он ничего не делает, лишь сидит на своём месте, но все девчонки возле него вьются.\nНо тут, вдруг, Канда посмотрел в их сторону, а Тсуна тут же отвернулся и сжался, будто его только что облили ледяной водой.\n- Фух...\nАллен тоже посмотрел на Канду и, показав ему язык, отвернулся.\n- Пф! И что они в нём нашли, не понима... - вот теперь уже Аллен замер уставившись на дверной проём, откуда излучалась аура смерти. Это был Хибари Кёя\n"Что ему нужно?!"\nСцена 2. Дубль 1.\n- Кто... кто посмел прийти в университет без сменки?!!!\nТут Уолкер посмотрел на пол и вздрогнул. Грязь! Лужи грязи от военных сапог, а такие сапоги только у...\n- Канда Юу! 
- взревел Кёя.\nНо парень лишь одарил его своим обычным, равнодушным взглядом, полным холода.\n- Что-то не так?\n- Ты, травоядное! - подойдя к Канде, Хибари ласково отодвинул парту. - Ты ответишь за то, что испачкал полы! - он насквозь прожигал взглядом.\n- Хм, ещё чего, - гордые синие глаза пронизывали холодом в ответ. Вдобавок он закинул ногу на ногу. - Сменка порвалась, другой я не нашёл, пришлось идти в университет так.\n- Да плевать я хотел! Босиком ходи! А помещение пачкать не смей! - рычал Кёя.\n- Завтра так и сделаю - фыркнул тот. - Это всё?\n- Будешь неделю мыть полы в этом коридоре! - нахмурился глава дисциплинарного комитета. - И начнёшь, прямо сейчас!\n- Тч, не намерен. Для этого есть уборщицы, и... - бросил короткий взгляд в сторону Уолкера и Тсуны. - Дежурные.\n- Чего-о-о?! - возмутился Уолкер. - За коридор мы не отвечаем!\n- Хм, - хмыкнул Кёя, и мальчик решил помолчать. - Ты запачкал ты и убирай, а иначе... - глаза сверкнули не по-доброму. - Камикорос!\n- Нет желания драться, но раз ты настаиваешь! - Канда поднялся с места, смотря на парня с вызовом. Он не собирался отдавать своему главному сопернику звание грозы университета.\n- О, это будет интересно, - злорадно ухмыльнулся. - Все вон! Пока не перебил.\nВесь класс, что жался по стеночкам, моментально высыпал в коридор. Кроме Тсуны и Аллена, что заворожено наблюдали за событиями. Савада со страху вцепился в руку Уолкера, а сам парень обеспокоенно смотрел в сторону длинноволосого японца. - Юу... - тихо позвал он.\n- Правильно, свидетели ни к чему, - так же ухмыльнулся Канда, разминая руки. - Вы, двое, разве не ясно было сказано? - глянул он в сторону парней.\n- Аллен, может... - тихо проскулил Тсуна, прекрасно знавший нрав Хибари.\nБелобрысый, что всё это время переводил взгляд с Кёя на Канду, вздохнул, опустив глаза, и поддался на уговоры Савады, позволив утащить себя в коридор.\nСцена 3. Дубль 1.\n- Хе... - Хибари странно хмыкнул, краем глаза наблюдая за ушедшими.\nКанда так же молча, проводил подростков взглядом и вновь посмотрел на своего противника. Эта ухмылка ни о чём добром не говорила.\nХибари неожиданно ударил парня в живот так, что тот отлетел к окну.\n- Тч... - Канда согнулся, но быстро пришёл в себя и поднялся. Последовал ответный удар.\n- Хм... слабак! - Кёя быстро блокировал этот удар и подсечкой сбил противника с ног.\nЮу не растерялся и ударил его по ногам, тоже завалив на пол и сел на него, как на скамейку. Потом поднялся и заломил тому руки, пригибая к полу.\n- Бесишь!\nХибари вывернулся и с разворота ударил по лицу.\n- Травоядные должны молчать и подчиняться!\n- Я такой сволочи подчиняться не собираюсь! - удар в бок по печени.\nКёя ударил по голове тонфа.\n- А тебя никто не спрашивает! Дисциплина на первом месте!\nКанда заехал ногой в живот. Сапогами это очень жестоко.\n- Пока меня никто не трогает, я спокоен!\n- Пока не нарушаешь правила, спокоен я! - парень с силой ударил по солнечному сплетению.\n- Кх... ублюдок, - сморщился Юу.\n- Тоже мне - наглый! Думаешь, хуй отрастил, и тебе всё дозволено?! - прорычал Хибари.\n- Говори, что хочешь, но полы мыть я не собираюсь, - тем же тоном ответил противник.\n- Но таки вымоешь! - снова ударил глава дисциплинарного комитета.\n- Завтра вообще не явлюсь. И плакал ваш кубок за первое место по баскетболу, - вытерпел Канда.\n- Ты мне тут не угрожай! Незаменимых людей не бывает! А тем более тебя заменить - раз плюнуть!\n- Ох, тогда это же отлично! Завтра целый день проваляюсь в кровати и не увижу этого мелкого. Задолбал пялиться.\n- М? А причём тут Аллен?! 
- Кёя вскинул бровь.\n- Притом, что достал, - вздохнул Юу, - странный он какой-то. И смотрит на меня как-то странно.\n- Радовался бы! Все остальные от тебя шарахаются. С такими темпами и до онанизма недалеко, или ты уже? - усмехнулся Хибари.\n- Пф, нет... и что вообще за вопросы? Уолкер меня в последнюю очередь интересует.\n- Я не об этой козявке говорю! А про то, что с твоим характером ни одна девушка к тебе не подойдёт!\n- Хе, - усмехнулся Канда. - Спорим, я любую за день смогу закадрить? И заняться сексом.\n- Ты-то? Ха! И за месяц не справишься! - оскалился Кёя.\n- Так значит, спорим? - приподнялся Юу. - Но тогда и ты участвуешь.\n- Хе, даю тебе неделю! - Хибари убрал тонфа и протянул свою руку.\n- Договорились, - пожал руку тот. - И кто станет целью?\n- Хм... а тот, кто первый войдёт в этот кабинет! Чтоб уж честно было. В подтверждение победы принесу тебе нижнее бельё жертвы! - глава дисциплинарного комитета крепче сжал руку и, рванув на себя, перекинул Канду через спину на пол. - Но учти, если ты проиграешь, будешь драить университет весь год!\n- Тч... ладно - Юу поднялся, держась за спину. - Я тебе это не прощу.\nТут в дверь тихонько постучались.\n- А если выиграешь ты, я на год от тебя отстану! - хмыкнул Кёя и повернулся к двери.\n- Хибари-сан! Я, конечно, понимаю, что дисциплина - это святое, и поддерживаю ваше решение надрать этому придурку зад! Но у нас тут урок, а мне реферат сдавать! - зашёл безупречный Аллен Уолкер, в которого намертво вцепился Савада пытаясь остановить.\nСцена 4. Дубль 1.\nКанда посмотрел на мальчишку и издал тихий звук, похожий на кошачье шипение. Видимо он не ожидал, что первыми в класс зайдут именно эти двое.\n- Проходи, зашёл уже, - хмыкнул Юу и, отвесив Хибари подзатыльник, поспешил вернуться на своё место.\n- О, ты ещё живой?! Печально... - покачал головой Аллен. - Ребята заходите, Хибари ушёл! - тут же в дверь повалили остальные. И последним зашёл преподаватель. Беловолосый достал из сумки рисунки и чертежи, после чего развесил, взял указку и начал рассказывать реферат по экологии.\nВообще, он не был отличником, но большим трудягой!\nЕсли раньше Юу мечтал отсидеть последние уроки и свалить домой, то теперь его желанием было, чтобы уроки никогда не заканчивались.\n"Тч, Шпендель. Почему, почему ты так не вовремя свалился мне на голову?!" - думал он, делая вид, что слушает.\n- ... И вот поэтому для спасения китов так важно прекратить стрельбу и перевоз нефти через океан! У меня всё! - закончил рассказ.\n- Ну что ж, думаю, на 4-ку вполне хватит.\n- Что?! Но учитель, у него потрясающий доклад! - защебетал одногруппник.\n- Он много готовился, волновался, почему четыре?! - заступился за Аллена Тсуна.\n- Да прекрасный доклад, если честно, не ожидал, - высказался человек, которого меньше всего это могло волновать. Канда смотрел на преподавателя.\n- Э-Э-Э?! - ошалел весь класс.\n- А... э-эт-то... - залился румянцем Уолкер.\n- Ну ладно, 5!\nЮу, победно ухмыльнувшись, перевел взгляд на Аллена. Тот потупился и уставился в пол.\n"Хм, возможно это будет не так уж и ужасно", - почему-то только сейчас Кан',
'query: - Доброе утро, - шепот щекочет мне ухо. Совсем не хочется разлеплять глаза и встречать новый день. Поворачиваюсь, притягивая тебя ближе, и утыкаюсь носом тебе в грудь, ощущая запах сладостей, которые нравятся нам обоим. Я ежусь от холода, пытаясь вслепую найти края уютного одеяла и снова окунуться в сон. Ты замечаешь это и заботливо укрываешь меня. Твои пальцы перебирают мои волосы, а губы легко касаются моего лба. Мы так и застываем в этой позе на некоторое время.\nПроходит всего несколько минут, а потом я резко сажусь на кровати и начинаю ворчать, что уже давно пора вставать, ведь сегодня предстоит поездка на природу вместе с друзьями. У тебя на лице появляется улыбка, а руки тянут обратно, заставляя вновь откинуться на подушки. На улице льет дождь, барабаня в окна, а что еще делать в такой день, если не нежиться в уютной постели в объятиях любимого?\nСколько времени мы были знакомы, прежде, чем узнали о чувствах друг друга? Да, я не помню этого, но кто считает? Главное, что в моей памяти до сих пор бережно хранится момент, когда ты наконец услышал те важные слова. Перед глазами всплывают счастливые мгновения, словно кадры, запечатлевшие всё в мельчайших деталях. Это произошло в морозный январский день. Весёлая компания молодых людей не могла просто сидеть дома взаперти и упустить такой хороший случай для прогулки по заснеженному лесу и прочих зимних забав.\nТы тогда оказался вне нашего поля зрения, а темнота уже начала опускаться на землю. Конечно, мне ничего не оставалось, кроме как отправиться на поиски. На моем лице застыло удивление, когда я застал тебя за странным занятием: было забавно наблюдать за тобой, выводящим акварелью и баллончиками с краской некие узоры прямо на снегу. Твои необычность и непредсказуемость притягивали к себе мою натуру.\n- Ты мне нравишься. Очень, - кажется, будто всё замерло, и в звенящей тишине прозвучали простые слова, которые тяжело произнести. Что могло толкнуть меня просто взять и сказать их? Однако ответ на этот вопрос уже не важен, теперь он оставил место для беспокойства. Твои эмоции сложно прочитать. Так было всегда. Молчание нагнетает напряжение между нами.\nПрикосновение ледяных пальцев к моей щеке выводит из оцепенения, сковавшего тело. Я еле-еле различаю, что ты сейчас говоришь, но некоторые обрывки фраз всё же приобретают смысл. Никогда не верил в чудеса, да вот только сейчас понимаю: они случаются. Маленькое чудо - узнать об ответных чувствах того, кто так много значит для тебя.\nМы идем с тобой по заметенным снегом улицам. Вьюга, завывая, дует в лицо, сбивая прохожих с пути, а у меня на душе - спокойствие и умиротворение... Когда ты рядом, происходящее вокруг не имеет значения, и нет дела до всех остальных.\nМне слышно, как твои зубы стучат от холода. Сжавшись, ты прячешь нос в высокий ворот куртки. Я уверен, что твои руки в карманах давно не могут отогреться и принять нормальную температуру.\n- Замерз? - спрашиваю, заглядывая в карие глаза, обрамленные черными ресницами, на которые тихо падают снежинки, и, не дожидаясь ответа, тяну тебя в ближайшее кафе.\n- Пойдем домой, а то воспаление легких подхватишь, - строго замечаешь ты, уже направляясь в сторону нашего подъезда.\n- Постой, разве не видишь, какая чудесная погода? - знаешь ведь, что мне нравится гулять под дождем, подставляя лицо падающим холодным каплям.\nТебе в голову быстро приходит мысль, как заставить меня уйти в более сухое и теплое место. Долго не раздумывая, рывком притягиваешь к себе, прижимаясь к моим губам. 
От неожиданности я приоткрываю их, а руками начинаю гладить твою спину, к которой прилипла изрядно промокшая рубашка. Не спеша, ты углубляешь поцелуй, еще больше раззадоривая. Именно так и предполагалось, правда?\nКое-как справившись с замком, мы вваливаемся в полутемную квартиру, едва сумев устоять на ногах. Перед глазами до сих пор стоит пелена дождя. Ты сразу же резко прижимаешь меня к стене, и твой язык врывается в мой рот в неистовом поцелуе, беспорядочно двигается вдоль зубов и возвращается к моему языку. Я не стремлюсь брать инициативу на себя, мне всегда нравилось плавиться под натиском твоих ласк и ожидать, что же ты предпримешь дальше. У тебя почти всегда ледяные пальцы, и у меня мурашки бегут по коже от приятных, но холодных прикосновений. Тебе нравится смотреть, как прогибается моя спина, когда ты рисуешь на ней невидимые линии. В джинсах уже становится тесно, а в голове образуется пустота, заполняемая лишь тобой. Твои руки опускаются ниже, нащупывая пряжку ремня. О, ты же сам затеял эту игру, малыш, так давай поиграем?\nЯ, всё так же находясь в крепких объятиях, делаю неожиданный разворот, привычно занимая роль актива. Ты с замиранием сердца смотришь на меня, прекратив все действия. Улыбнувшись, провожу языком по твоему уху, чуть прикусывая мочку, от чего твои дрожащие пальцы перемещаются вверх и судорожно сжимают мои волосы, с которых стекает вода. Нетерпеливо расстегиваю пуговицы твоей рубашки, попутно оставляя несколько багровых отметин на шее и на груди. До меня доносится стон, и я продолжаю медленную пытку, стягивая с тебя брюки вместе с бельем. В тишине, разбавляемой нашим тяжелым дыханием, раздается шумный выдох, когда я делаю несколько движений рукой по основанию члена, а затем, лизнув головку, отстраняюсь, глядя в одурманенные глаза.\n- Спальня, - шепчешь ты, крепко держась за край тумбочки, стоявшей рядом.\nПросить дважды нет смысла, ведь у меня самого уже нет сил терпеть это тянущее ощущение, образовавшееся внизу живота.\nЛегко подхватываю тебя на руки и иду в ту комнату, в которой мы столько раз занимались любовью.\nКровать встречает нас знакомым скрипом, когда я опускаю тебя, нервно кусающего губы. Ты хватаешь меня и тянешь на себя, отчего оказываешься прижатым моим телом. Твои руки скользят по моим бокам, помогая снять футболку и, приложив некоторые усилия, приподнявшись, обводишь языком мои соски, слегка царапая их зубами.\nЧувствуя необходимость скорейшей разрядки, я пытаюсь как можно скорее снять джинсы и нашарить в ящичке шкафа смазку и презервативы. Нетерпеливо устраиваюсь поудобнее между твоих ног, разводя их в стороны и немного сгибая в коленях. Выдавливаю гель и поочередно аккуратно ввожу в тебя пальцы, растягивая проход. Слышу твое сдавленное шипение и стараюсь отвлечь своими ласками, покрывая грудь и плечи поцелуями, кое-где чуть прикусывая кожу.\nТы заерзал и недовольно уставился на меня, требуя большего. Я с удовольствием подчиняюсь. Приставляю член ко входу и медленно вхожу, на что получаю еле заметный кивок, как разрешение продолжать. Спустя несколько толчков ты выгибаешься в позвоночнике, и на моем лице появляется улыбка. Я увеличиваю темп, двигаясь всё быстрее.\nОргазм стремительно накрывает нас с головой, даря столь долгожданное наслаждение. Со сбившимся дыханием, со звездочками в глазах падаю рядом с тобой, раскрасневшимся, тяжело дышащим, но таким любимым. Ты прижимаешься ко мне, положив голову на мою грудь. 
Делать сейчас что-либо выше всяких сил - я продолжаю лежать, поглаживая твои волосы и вслушиваясь в биение наших сердец.\nПочему я тебя тогда не послушал? Зачем позволил тебе мокнуть под дождем вместе со мной? Если бы не эта ошибка, ты бы не подхватил серьезную болезнь. Меня до сих пор терзает чувство вины. Очень тяжело осознавать, что погубил чью-то жизнь... Особенно того, кто был центром моей Вселенной.\nЯ продолжаю жить прошлым, не могу не вспоминать те немногие, но такие дорогие моему сердцу моменты, проведенные с тобой. Мы совсем недолго были вместе. Меня часто можно встретить на том самом месте в лесу, где я открылся тебе. Иногда мне кажется, что сквозь сильную метель вижу твой силуэт. Ты улыбаешься и делаешь несколько шагов навстречу, а потом исчезаешь...\nЕсли бы только была возможность еще раз услышать такое теплое "доброе утро", почувствовать горячее дыхание, щекочущее ухо, хоть что-нибудь...\nПока ты был рядом, было совсем не важно, что происходит вокруг. Но теперь, когда я наблюдаю за ненастной погодой в окно, у меня нет светлых мыслей и легкости, что возникали раньше. Даже летом мое сердце сковывает лед, который уже не удастся растопить.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:--------------------------|:-----------|
| cosine_accuracy | 0.9225 |
| cosine_accuracy_threshold | 0.7901 |
| cosine_f1 | 0.756 |
| cosine_f1_threshold | 0.7818 |
| cosine_precision | 0.7562 |
| cosine_recall | 0.7557 |
| **cosine_ap** | **0.8479** |
| cosine_mcc | 0.7072 |
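These numbers come from running the evaluator over held-out sentence pairs. Below is a minimal sketch of reproducing such an evaluation, assuming a locally available copy of this model and a small hand-labeled set of pairs (the sentences, labels, and model path are placeholders):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

# Placeholder data: each (sentence1, sentence2) pair labeled 1 (similar) or 0 (dissimilar)
sentences1 = ["query: first passage ...", "query: second passage ..."]
sentences2 = ["query: a related passage ...", "query: an unrelated passage ..."]
labels = [1, 0]

model = SentenceTransformer("path/to/this/model")  # placeholder path
evaluator = BinaryClassificationEvaluator(
    sentences1=sentences1,
    sentences2=sentences2,
    labels=labels,
    name="binary-eval",
)
print(evaluator(model))  # cosine_accuracy, cosine_f1, cosine_ap, ... on recent versions
```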
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 276,686 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 445 tokens</li><li>mean: 510.97 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 454 tokens</li><li>mean: 511.61 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>query: Что что-то не так, интуиция подсказывала Занзасу с самого утра. Благополучно проигнорировав пробуждение с лигром в постели, Бестер периодически спал рядом с хозяином. Занзас лениво но, как известно организму не откажешь, спустился на кухню. Относительно спокойно поев, и избавившись от новоявленного трупа, который пролил на него подливу к мясу, босс всея варии отправился в душ. Быстро вымывшись и обвязав короткое полотенце на бедрах, он вернулся в свою спальню и прилег на кровать рядом с лигром. Немного потрепав его гриву, брюнет разлегся на кровати. Животное же вспомнило, как длинноволосый парень использовал его хозяина как самку. А Бестер по характеру был очень похож на Занзаса собственничеством, по этой причине зверь встал, потоптался на постели и забрался на своего хозяина. Занзас вновь не придал этому значения, принимая за попытку ленивого животного слезть с кровати. Это и было его ошибкой. Своим немалым весом Бестер придавил мужчину к постели, и отдельно придавил одной лапо...</code> | <code>query: Аомине неспешно шел в сторону школы Сейрин. Уроки еще шли, поэтому ему было некуда торопиться, а свои он благополучно прое...кхм, пропустил, дабы наведаться к Кагами. Зачем, он и сам до конца не понимал, но привык следовать своим желаниям и кормить внутренних демонов. В наушниках играла незатейливая мелодия на английском, а сам Аомине не заморачивался текстом, наслаждаясь звучанием музыки и голосом певца.<br>Войдя во двор, он огляделся, ища вход в учебный корпус. Найдя же нужную дверь, он прошел внутрь, подходя к расписанию. Проведя пальцем по цифре нужного класса, он взглянул на сами уроки.<br>-Хм, японский... Не думаю, что он будет против, если я его отмажу. - И, по акульи улыбнувшись, парень направился на второй этаж, к кабинету номер тринадцать. Предварительно заглянув в замочную скважину, увидев молоденькую учительницу и выглядывающую из-под блузки татуировку "I love yaoi", в очередной раз оскалился и прошел в кабинет. Девушка не успела даже пикнуть, как он был у парты Кагами. На...</code> | <code>1</code> |
| <code>query: Что что-то не так, интуиция подсказывала Занзасу с самого утра. Благополучно проигнорировав пробуждение с лигром в постели, Бестер периодически спал рядом с хозяином. Занзас лениво но, как известно организму не откажешь, спустился на кухню. Относительно спокойно поев, и избавившись от новоявленного трупа, который пролил на него подливу к мясу, босс всея варии отправился в душ. Быстро вымывшись и обвязав короткое полотенце на бедрах, он вернулся в свою спальню и прилег на кровать рядом с лигром. Немного потрепав его гриву, брюнет разлегся на кровати. Животное же вспомнило, как длинноволосый парень использовал его хозяина как самку. А Бестер по характеру был очень похож на Занзаса собственничеством, по этой причине зверь встал, потоптался на постели и забрался на своего хозяина. Занзас вновь не придал этому значения, принимая за попытку ленивого животного слезть с кровати. Это и было его ошибкой. Своим немалым весом Бестер придавил мужчину к постели, и отдельно придавил одной лапо...</code> | <code>query: Аомине был ангелом уже очень давно. Он даже не помнил сколько лет, даже веков прошло с того момента. Он любил сидя на одном из облаков наблюдать за Землей, а особенно осенью. И это был обычный день, но Аомине захотелось посмотреть поближе. Раскрыв огромные крылья, он устремился камнем вниз. Для людей это выглядело как упавшая звезда, яркая линия в небе. Никто не знал, что все эти линии чертились падающими ангелами, лишь поэтому желания, которые было принято загадывать, исполнялись. На большой скорости молодой ангел приземлился. Когда облако пыли рассеялось, стало видно, что он стоит на одном колене, упираясь руками в землю. Сложив свои крылья, он скрыл их от человеческих глаз, его одежда всегда была как земная, и сейчас тоже. Белая борцовка, расстегнутая бледно голубая рубашка без рукавов и темно синие джинсы. Заинтересованно смотря по сторонам, он побрел в сторону города, мимо ехали машины, но он их не замечал. Зайдя в город, он был почти ослеплен неоновыми вывесками и многочис...</code> | <code>1</code> |
| <code>query: Что что-то не так, интуиция подсказывала Занзасу с самого утра. Благополучно проигнорировав пробуждение с лигром в постели, Бестер периодически спал рядом с хозяином. Занзас лениво но, как известно организму не откажешь, спустился на кухню. Относительно спокойно поев, и избавившись от новоявленного трупа, который пролил на него подливу к мясу, босс всея варии отправился в душ. Быстро вымывшись и обвязав короткое полотенце на бедрах, он вернулся в свою спальню и прилег на кровать рядом с лигром. Немного потрепав его гриву, брюнет разлегся на кровати. Животное же вспомнило, как длинноволосый парень использовал его хозяина как самку. А Бестер по характеру был очень похож на Занзаса собственничеством, по этой причине зверь встал, потоптался на постели и забрался на своего хозяина. Занзас вновь не придал этому значения, принимая за попытку ленивого животного слезть с кровати. Это и было его ошибкой. Своим немалым весом Бестер придавил мужчину к постели, и отдельно придавил одной лапо...</code> | <code>query: Тсунаеши лежал на постели в одном шелковом халате, ожидая прихода своего парня. Когда дверь отрылась, он развратно развел ножки и принялся вылизывать свои пальцы.<br>-Д-д-джудайме! Что Вы делаете?!<br>-Мммм, Хааааято, я тебя так хочу! - Изнывая от желания, произнес Савада.<br>-Да Вы что... Как я могу? - Тот покраснел и отвернулся.<br>-Знаешь что, Гокудера... - Голос стал сердитым. - Мы расстаемся! - И не дав подрывнику и слова сказать, Савада завязал пояс халата и, обувшись, покинул квартиру. Он быстро добрался до своей машины и, сев в нее, не менее быстро добрался до съемной квартиры Скуало. Потарабанив в дверь всего минуту, он ворвался в коридор и на недоуменный взгляд Скуало скинул халатик.<br>-Врооой, Савада, ты че творишь? - Суперби пытался собрать челюсть с пола, а сам Десятый оперся руками на стену, прогнул спинку и, расставив ножки потребовал.<br>-Трахни меня!<br>-Чего? Ты пьяный что ли? - Челюсть мечника во второй раз познакомилась с полом.<br>-Просто вставь. Мне можно без смазки и гандонов. -...</code> | <code>1</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 100,
"similarity_fct": "cos_sim"
}
```
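For reference, this loss configuration corresponds roughly to the following construction in code; a minimal sketch, with the model path as a placeholder:

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("path/to/base/model")  # placeholder path
# scale=100 and cosine similarity mirror the parameter dump above
loss = losses.MultipleNegativesRankingLoss(model, scale=100, similarity_fct=util.cos_sim)
```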
### Evaluation Dataset
#### json
* Dataset: json
* Size: 184,428 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 432 tokens</li><li>mean: 510.2 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 409 tokens</li><li>mean: 510.42 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>query: Дело было вечером, когда я отправлялась в гости к Мартине. Я взяла с собой Факундо, и Ксаби. Это мои сумасшедшие, но лучшие друзья. Короче коротко: Я - Лодовика, но можно просто Лодо.<br>Так как Тина живёт на 9 этаже, нам пришлось ехать на лифте, иначе на лестнице мы бы подохли. Короче заходим в лифт, и вот мы уже на нужном нам этаже.<br>Дверки открылись, я отвернулась на минутку. А потом повернулась, смотрю а этих идиотов нету. Вдруг дверки закрылись, я девушка не пугливая но, чёрт а вдруг я тут застряну?<br>- Придурки, блин! Откройте лифт! Застряну ведь!<br>В ответ я услышала лишь смех двух парней. Ну! Им не поздоровится когда я отсюда выйду.<br>Вдруг, дверки открылись, я вылезла из лифта, и эти дебилы насильно затаскивают меня в лифт, и жмут на кнопку, чтобы лифт поехал до 1 этажа.<br>- Быстрее! Факу! Надо быстрее Лодо спуститься на 1 этаж! - Закричал Ксабьяни.<br>- Понял! - В ответ крикнул Факундо.<br>Тут дверки закрылись, и меня понесло на 1 этаж. Через несколько минут я спустилась на нужный этаж,...</code> | <code>query: Я - Иккинг. Сын вождя, первый который приручил дракона.<br>У меня есть любимая девушка, но я давно её не видел. Последние новости меня привели в ужас.<br>Громгильда, дракониха, погибла. А Астрид не перенесла её смерти, и повесилась...<br>Мое сердце разбилось на сто осколков, моя единственная любовь погибла.<br>Говорят что викинги бессердечные, но это не так. Мы тоже умеем любить!<br>Раннее утро. Все викинги ещё спят, за окном холодно, солнце, но ветер всё же есть.<br>Я приоткрыл глаза, и заметил, что моя любимая рептилия всё ещё спит.<br>Так холодно, что мне захотелось всю вечность пролежать в тёплой постельке. Моя кровать так и манила лечь, и заставить спать. Но, тут мой чёрный друг проснулся. Он уставился на меня своими большими зелёными глазами.<br>- Что? - Не понимал я что происходит, но на мой вопрос Беззубик лишь фыркнул.<br>Но тут он расправил свои крылья, и подлетел ко мне. А свою морду положил мне на руки. Я явно не понимал что он от меня хочет.<br>Но тут он своей мордой уставился на свои крылья, и ...</code> | <code>1</code> |
| <code>query: Дело было вечером, когда я отправлялась в гости к Мартине. Я взяла с собой Факундо, и Ксаби. Это мои сумасшедшие, но лучшие друзья. Короче коротко: Я - Лодовика, но можно просто Лодо.<br>Так как Тина живёт на 9 этаже, нам пришлось ехать на лифте, иначе на лестнице мы бы подохли. Короче заходим в лифт, и вот мы уже на нужном нам этаже.<br>Дверки открылись, я отвернулась на минутку. А потом повернулась, смотрю а этих идиотов нету. Вдруг дверки закрылись, я девушка не пугливая но, чёрт а вдруг я тут застряну?<br>- Придурки, блин! Откройте лифт! Застряну ведь!<br>В ответ я услышала лишь смех двух парней. Ну! Им не поздоровится когда я отсюда выйду.<br>Вдруг, дверки открылись, я вылезла из лифта, и эти дебилы насильно затаскивают меня в лифт, и жмут на кнопку, чтобы лифт поехал до 1 этажа.<br>- Быстрее! Факу! Надо быстрее Лодо спуститься на 1 этаж! - Закричал Ксабьяни.<br>- Понял! - В ответ крикнул Факундо.<br>Тут дверки закрылись, и меня понесло на 1 этаж. Через несколько минут я спустилась на нужный этаж,...</code> | <code>query: Виолетта как всегда спала в своей кровати, и, в очередной раз ей снился кошмар. В очередной раз ей снилась ее покойная мать, Мария. Виолетта встала, вся вспотевшая, вся испуганная.<br>Вдруг дверь комнаты открылась, из за двери показался юноша. Он глядя на Виолетту нахмурил брови, и подошёл к ней.<br>- Виолетта, что с тобой? - Спросил он.<br>- Ничего. Просто опять кошмар приснился.<br>- Опять?<br>Федерико сел на край кровати, и обнял ее. Та не стала сопротивляться. Она обняла его в ответ, сейчас ей нужна поддержка. Опять сон, опять слёзы. Когда же бедной девушке прекратится сниться ее мать?<br>Виолетта встала из своей постели, и Федерико вышел из комнаты. Девушка начала одеваться, одевшись она спустилась на первый этаж, в гостиную.<br>Заметив что никого кроме Федерико в гостиной нету, она просила:<br>- А где все?<br>- Ольга пошла покупать продукты, а Ромальо и Герман на работе.<br>- Понятно.<br>Всё как всегда, ничего не меняется, кроме моих кошмаров.<br>Я села на диван, напротив Федерико, он что то писал на бумажке...</code> | <code>1</code> |
| <code>query: Дело было вечером, когда я отправлялась в гости к Мартине. Я взяла с собой Факундо, и Ксаби. Это мои сумасшедшие, но лучшие друзья. Короче коротко: Я - Лодовика, но можно просто Лодо.<br>Так как Тина живёт на 9 этаже, нам пришлось ехать на лифте, иначе на лестнице мы бы подохли. Короче заходим в лифт, и вот мы уже на нужном нам этаже.<br>Дверки открылись, я отвернулась на минутку. А потом повернулась, смотрю а этих идиотов нету. Вдруг дверки закрылись, я девушка не пугливая но, чёрт а вдруг я тут застряну?<br>- Придурки, блин! Откройте лифт! Застряну ведь!<br>В ответ я услышала лишь смех двух парней. Ну! Им не поздоровится когда я отсюда выйду.<br>Вдруг, дверки открылись, я вылезла из лифта, и эти дебилы насильно затаскивают меня в лифт, и жмут на кнопку, чтобы лифт поехал до 1 этажа.<br>- Быстрее! Факу! Надо быстрее Лодо спуститься на 1 этаж! - Закричал Ксабьяни.<br>- Понял! - В ответ крикнул Факундо.<br>Тут дверки закрылись, и меня понесло на 1 этаж. Через несколько минут я спустилась на нужный этаж,...</code> | <code>query: Я - Джамиля, дочь знатного графа. Моя мать умерла при родах, а я осталась жива. Уже как 20 лет прошло со смерти любящий матери. Мой отец снова женился чтобы у меня был пример для подражания.<br>Мою мачеху зовут Элизабет. Вроде имя доброе, а сама женщина не из лучших.<br>Мы с Элизабет не ладили. Мой отец уехал, дома осталась я с мачехой, которая совсем не занималась моим воспитанием как поручил ей мой отец.<br>Дом у нас был богатый, красивый. И много слуг.<br>Попутный ветер дует мне прямо в лицо. В округе посажены цветы.<br>Сейчас я в саду. Я очень редко улыбаюсь, так как таких радостных моментов, у меня было очень мало.<br>Я редко выхожу из своего дома, даже практически из своей комнаты не выхожу.<br>Моя мачеха очень редко выпускает меня подышать воздухом, она говорит что мне нельзя выходить на улицу, и общаться с людьми, пока я не научусь правилами этикета.<br>Немного подышав воздухом я зашла в дом. Ко мне сразу же подбежала Элизабет.<br>Глаза её были наполнены гневом. Она прожигала меня своим зловещим в...</code> | <code>1</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 100,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `weight_decay`: 0.01
- `num_train_epochs`: 5
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
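In code, these non-default settings correspond roughly to the following training-arguments sketch (`output_dir` is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="epoch",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    weight_decay=0.01,
    num_train_epochs=5,
    bf16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```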
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | cosine_ap |
|:------:|:-----:|:-------------:|:---------------:|:---------:|
| 1.0176 | 4400 | 1.4186 | - | - |
| 1.0407 | 4500 | 1.4075 | - | - |
| 1.0638 | 4600 | 1.3934 | - | - |
| 1.0870 | 4700 | 1.3799 | - | - |
| 1.1101 | 4800 | 1.3597 | - | - |
| 1.1332 | 4900 | 1.3351 | - | - |
| 1.1563 | 5000 | 1.3082 | - | - |
| 1.1795 | 5100 | 1.3105 | - | - |
| 1.2026 | 5200 | 1.2948 | - | - |
| 1.2257 | 5300 | 1.3486 | - | - |
| 1.2488 | 5400 | 1.3155 | - | - |
| 1.2720 | 5500 | 1.2761 | - | - |
| 1.2951 | 5600 | 1.2541 | - | - |
| 1.3182 | 5700 | 1.2346 | - | - |
| 1.3414 | 5800 | 1.2285 | - | - |
| 1.3645 | 5900 | 1.2013 | - | - |
| 1.3876 | 6000 | 1.1986 | - | - |
| 1.4107 | 6100 | 1.1755 | - | - |
| 1.4339 | 6200 | 1.1937 | - | - |
| 1.4570 | 6300 | 1.202 | - | - |
| 1.4801 | 6400 | 1.1607 | - | - |
| 1.5032 | 6500 | 1.2116 | - | - |
| 1.5264 | 6600 | 1.1797 | - | - |
| 1.5495 | 6700 | 1.1571 | - | - |
| 1.5726 | 6800 | 1.1526 | - | - |
| 1.5957 | 6900 | 1.1438 | - | - |
| 1.6189 | 7000 | 1.1634 | - | - |
| 1.6420 | 7100 | 1.1367 | - | - |
| 1.6651 | 7200 | 1.1133 | - | - |
| 1.6883 | 7300 | 1.1156 | - | - |
| 1.7114 | 7400 | 1.1102 | - | - |
| 1.7345 | 7500 | 1.1123 | - | - |
| 1.7576 | 7600 | 1.1066 | - | - |
| 1.7808 | 7700 | 1.1291 | - | - |
| 1.8039 | 7800 | 1.1094 | - | - |
| 1.8270 | 7900 | 1.094 | - | - |
| 1.8501 | 8000 | 1.1585 | - | - |
| 1.8733 | 8100 | 1.077 | - | - |
| 1.8964 | 8200 | 1.108 | - | - |
| 1.9195 | 8300 | 1.1431 | - | - |
| 1.9426 | 8400 | 1.0784 | - | - |
| 1.9658 | 8500 | 1.0834 | - | - |
| 1.9889 | 8600 | 1.1268 | - | - |
| 2.0 | 8648 | - | 9.6992 | 0.8450 |
| 2.0120 | 8700 | 1.0443 | - | - |
| 2.0352 | 8800 | 0.9715 | - | - |
| 2.0583 | 8900 | 0.957 | - | - |
| 2.0814 | 9000 | 0.9784 | - | - |
| 2.1045 | 9100 | 0.9581 | - | - |
| 2.1277 | 9200 | 0.9569 | - | - |
| 2.1508 | 9300 | 0.9518 | - | - |
| 2.1739 | 9400 | 0.9485 | - | - |
| 2.1970 | 9500 | 0.9433 | - | - |
| 2.2202 | 9600 | 0.9392 | - | - |
| 2.2433 | 9700 | 0.9248 | - | - |
| 2.2664 | 9800 | 0.9105 | - | - |
| 2.2895 | 9900 | 0.9769 | - | - |
| 2.3127 | 10000 | 0.9502 | - | - |
| 2.3358 | 10100 | 0.9604 | - | - |
| 2.3589 | 10200 | 0.9291 | - | - |
| 2.3821 | 10300 | 0.9552 | - | - |
| 2.4052 | 10400 | 0.9621 | - | - |
| 2.4283 | 10500 | 0.9357 | - | - |
| 2.4514 | 10600 | 0.9323 | - | - |
| 2.4746 | 10700 | 0.9327 | - | - |
| 2.4977 | 10800 | 0.9067 | - | - |
| 2.5208 | 10900 | 0.9411 | - | - |
| 2.5439 | 11000 | 0.9305 | - | - |
| 2.5671 | 11100 | 0.9378 | - | - |
| 2.5902 | 11200 | 0.9171 | - | - |
| 2.6133 | 11300 | 0.9074 | - | - |
| 2.6364 | 11400 | 0.9262 | - | - |
| 2.6596 | 11500 | 0.9063 | - | - |
| 2.6827 | 11600 | 0.8814 | - | - |
| 2.7058 | 11700 | 0.9089 | - | - |
| 2.7290 | 11800 | 0.9048 | - | - |
| 2.7521 | 11900 | 0.9268 | - | - |
| 2.7752 | 12000 | 0.8913 | - | - |
| 2.7983 | 12100 | 0.9064 | - | - |
| 2.8215 | 12200 | 0.8585 | - | - |
| 2.8446 | 12300 | 0.878 | - | - |
| 2.8677 | 12400 | 0.8612 | - | - |
| 2.8908 | 12500 | 0.8799 | - | - |
| 2.9140 | 12600 | 0.8541 | - | - |
| 2.9371 | 12700 | 0.8521 | - | - |
| 2.9602 | 12800 | 0.8582 | - | - |
| 2.9833 | 12900 | 0.869 | - | - |
| 3.0 | 12972 | - | 10.4115 | 0.8479 |
### Framework Versions
- Python: 3.10.18
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.7.1+cu128
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
KoichiYasuoka/modernbert-base-classical-chinese-ud-square
|
KoichiYasuoka
| 2025-06-19T10:05:04Z | 0 | 0 | null |
[
"pytorch",
"modernbert",
"classical chinese",
"literary chinese",
"ancient chinese",
"token-classification",
"pos",
"dependency-parsing",
"lzh",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/modernbert-base-classical-chinese",
"base_model:finetune:KoichiYasuoka/modernbert-base-classical-chinese",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2025-06-19T10:03:44Z |
---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/modernbert-base-classical-chinese
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "孟子見梁惠王"
---
# modernbert-base-classical-chinese-ud-square
## Model Description
This is a ModernBERT model pretrained on Classical Chinese texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [modernbert-base-classical-chinese](https://huggingface.co/KoichiYasuoka/modernbert-base-classical-chinese) and [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto).
## How to Use
```py
from transformers import pipeline

# "universal-dependencies" is a custom pipeline bundled with this repository,
# hence trust_remote_code=True is required to load it
nlp = pipeline("universal-dependencies", "KoichiYasuoka/modernbert-base-classical-chinese-ud-square",
               trust_remote_code=True, aggregation_strategy="simple")
print(nlp("孟子見梁惠王"))
```
|
sgonzalezygil/sd-finetuning-dreambooth-v17-1500
|
sgonzalezygil
| 2025-06-19T10:03:07Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T10:01:39Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
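Since the card is auto-generated, no snippet is provided. A minimal sketch for loading this repository as a standard Stable Diffusion pipeline (as its tags indicate); the prompt, dtype, and device are placeholder choices:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sgonzalezygil/sd-finetuning-dreambooth-v17-1500",
    torch_dtype=torch.float16,  # placeholder precision choice
)
pipe = pipe.to("cuda")  # placeholder device
image = pipe("a photo of the fine-tuned subject").images[0]  # placeholder prompt
image.save("output.png")
```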
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomaarsen/splade-cocondenser-ensembledistil-nli
|
tomaarsen
| 2025-06-19T09:58:44Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sparse-encoder",
"sparse",
"splade",
"generated_from_trainer",
"dataset_size:10000",
"loss:SpladeLoss",
"loss:SparseMultipleNegativesRankingLoss",
"loss:FlopsLoss",
"feature-extraction",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:2205.04733",
"arxiv:1705.00652",
"arxiv:2004.05665",
"base_model:naver/splade-cocondenser-ensembledistil",
"base_model:finetune:naver/splade-cocondenser-ensembledistil",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-19T09:58:31Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:10000
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: naver/splade-cocondenser-ensembledistil
widget:
- text: Two kids at a ballgame wash their hands.
- text: Two dogs near a lake, while a person rides by on a horse.
- text: This mother and her daughter and granddaughter are having car trouble, and
the poor little girl looks hot out in the heat.
- text: A young man competes in the Olympics in the pole vaulting competition.
- text: A man is playing with the brass pots
datasets:
- sentence-transformers/all-nli
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- active_dims
- sparsity_ratio
co2_eq_emissions:
emissions: 2.9668555526185707
energy_consumed: 0.007632725204960537
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.033
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: splade-cocondenser-ensembledistil trained on Natural Language Inference (NLI)
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.8541311579868741
name: Pearson Cosine
- type: spearman_cosine
value: 0.8470008029984434
name: Spearman Cosine
- type: active_dims
value: 99.30233383178711
name: Active Dims
- type: sparsity_ratio
value: 0.9967465325394211
name: Sparsity Ratio
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.8223074543214202
name: Pearson Cosine
- type: spearman_cosine
value: 0.8065254878130631
name: Spearman Cosine
- type: active_dims
value: 95.75453186035156
name: Active Dims
- type: sparsity_ratio
value: 0.9968627700720676
name: Sparsity Ratio
---
# splade-cocondenser-ensembledistil trained on Natural Language Inference (NLI)
This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil) <!-- at revision 25178a62708a3ab1b5c4b5eb30764d65bfddcfbb -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-cocondenser-ensembledistil-nli")
# Run inference
sentences = [
'A man is sitting in on the side of the street with brass pots.',
'A man is playing with the brass pots',
'A group of adults are swimming at the beach.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[16.8617, 12.9505, 0.2749],
# [12.9505, 20.8479, 0.2440],
# [ 0.2749, 0.2440, 18.7043]])
```
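Because each embedding is a sparse vector over the 30522-token vocabulary, it can also be inspected token by token; a small sketch, assuming the `decode` helper available on recent `SparseEncoder` versions:

```python
# Show the ten highest-weighted vocabulary tokens for the first sentence
for token, weight in model.decode(embeddings[0], top_k=10):
    print(f"{token}: {weight:.2f}")
```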
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `sts-dev` and `sts-test`
* Evaluated with [<code>SparseEmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseEmbeddingSimilarityEvaluator)
| Metric | sts-dev | sts-test |
|:--------------------|:----------|:-----------|
| pearson_cosine | 0.8541 | 0.8223 |
| **spearman_cosine** | **0.847** | **0.8065** |
| active_dims | 99.3023 | 95.7545 |
| sparsity_ratio | 0.9967 | 0.9969 |
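A minimal sketch of reproducing the `sts-test` column, assuming the evaluation data is the STS Benchmark as distributed in `sentence-transformers/stsb` (that dataset choice is an assumption):

```python
from datasets import load_dataset
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseEmbeddingSimilarityEvaluator

model = SparseEncoder("tomaarsen/splade-cocondenser-ensembledistil-nli")
stsb = load_dataset("sentence-transformers/stsb", split="test")  # assumed dataset
evaluator = SparseEmbeddingSimilarityEvaluator(
    sentences1=stsb["sentence1"],
    sentences2=stsb["sentence2"],
    scores=stsb["score"],
    name="sts-test",
)
print(evaluator(model))  # pearson_cosine, spearman_cosine, active_dims, sparsity_ratio
```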
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 10,000 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.38 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.7 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------|:---------------------------------------------------------------|:-----------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>0.5</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code> | <code>0.0</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>1.0</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1, similarity_fct='dot_score')",
"lambda_corpus": 0.003
}
```
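In code, this corresponds roughly to the following loss construction; keyword names mirror the dump above and may differ in later sentence-transformers releases:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMultipleNegativesRankingLoss

model = SparseEncoder("naver/splade-cocondenser-ensembledistil")
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model, scale=1.0),  # dot-product similarity by default
    lambda_corpus=0.003,  # FLOPS regularization weight on document embeddings
)
```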
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 1,000 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.44 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.57 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:-----------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>The sisters are hugging goodbye while holding to go packages after just eating lunch.</code> | <code>0.5</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>1.0</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>The men are fighting outside a deli.</code> | <code>0.0</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1, similarity_fct='dot_score')",
"lambda_corpus": 0.003
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 4e-06
- `num_train_epochs`: 1
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
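For orientation, the non-default values above translate into a trainer configuration roughly like the following. This is a sketch under the assumption that the `SparseEncoderTrainer`/`SparseEncoderTrainingArguments` API of the same pre-release is used; `model`, `loss`, and the datasets are the objects described earlier, and the output directory is hypothetical.
```python
from sentence_transformers.sparse_encoder import (
    SparseEncoderTrainer,
    SparseEncoderTrainingArguments,
)
from sentence_transformers.training_args import BatchSamplers

# Only the non-default hyperparameters from the list above are set here;
# everything else keeps its library default.
args = SparseEncoderTrainingArguments(
    output_dir="models/splade-all-nli",  # hypothetical output directory
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=4e-6,
    num_train_epochs=1,
    bf16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SparseEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```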
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:--------:|:-------:|:-------------:|:---------------:|:-----------------------:|:------------------------:|
| -1 | -1 | - | - | 0.8366 | - |
| 0.032 | 20 | 0.8107 | - | - | - |
| 0.064 | 40 | 0.7854 | - | - | - |
| 0.096 | 60 | 0.7015 | - | - | - |
| 0.128 | 80 | 0.7161 | - | - | - |
| 0.16 | 100 | 0.724 | - | - | - |
| 0.192 | 120 | 0.6883 | 0.7255 | 0.8454 | - |
| 0.224 | 140 | 0.6661 | - | - | - |
| 0.256 | 160 | 0.6786 | - | - | - |
| 0.288 | 180 | 0.679 | - | - | - |
| 0.32 | 200 | 0.8013 | - | - | - |
| 0.352 | 220 | 0.6781 | - | - | - |
| 0.384 | 240 | 0.667 | 0.6779 | 0.8465 | - |
| 0.416 | 260 | 0.6691 | - | - | - |
| 0.448 | 280 | 0.7376 | - | - | - |
| 0.48 | 300 | 0.5601 | - | - | - |
| 0.512 | 320 | 0.6425 | - | - | - |
| 0.544 | 340 | 0.7406 | - | - | - |
| 0.576 | 360 | 0.6033 | 0.6623 | 0.8469 | - |
| 0.608 | 380 | 0.8166 | - | - | - |
| 0.64 | 400 | 0.5303 | - | - | - |
| 0.672 | 420 | 0.614 | - | - | - |
| 0.704 | 440 | 0.6253 | - | - | - |
| 0.736 | 460 | 0.5467 | - | - | - |
| 0.768 | 480 | 0.6804 | 0.6531 | 0.8470 | - |
| 0.8 | 500 | 0.6765 | - | - | - |
| 0.832 | 520 | 0.6522 | - | - | - |
| 0.864 | 540 | 0.5845 | - | - | - |
| 0.896 | 560 | 0.6786 | - | - | - |
| 0.928 | 580 | 0.5232 | - | - | - |
| **0.96** | **600** | **0.6077** | **0.6516** | **0.847** | **-** |
| 0.992 | 620 | 0.619 | - | - | - |
| -1 | -1 | - | - | - | 0.8065 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.008 kWh
- **Carbon Emitted**: 0.003 kg of CO2
- **Hours Used**: 0.033 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### FlopsLoss
```bibtex
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
yinita/cpdc_Qwen3-8B_grpo-0617_1318-onlytoolcall_step_100
|
yinita
| 2025-06-19T09:57:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T09:55:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
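Since this section is left unfilled, the following is a minimal sketch only, assuming the repository hosts a standard Qwen3 causal-LM checkpoint as the repo tags (`transformers`, `qwen3`, `text-generation`) suggest:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yinita/cpdc_Qwen3-8B_grpo-0617_1318-onlytoolcall_step_100"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt and generate a short completion.
messages = [{"role": "user", "content": "Hello, what can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```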
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Daria-best/stella_en_400M_v5_neurlips_papers_fine-tuned
|
Daria-best
| 2025-06-19T09:52:31Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:14255",
"loss:CachedMultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:NovaSearch/stella_en_400M_v5",
"base_model:finetune:NovaSearch/stella_en_400M_v5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-19T09:37:33Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:14255
- loss:CachedMultipleNegativesRankingLoss
base_model: NovaSearch/stella_en_400M_v5
widget:
- source_sentence: Classifier reduction techniques for improving prediction accuracy
sentences:
- 'INTRODUCTION While neural networks have proved a good tool for processing static
patterns, classifying sequential information has remained a challenging task.
The problem involves recognizing patterns in a time series of vectors, which requires
forming a good internal representation for the sequences. Several researchers
have proposed extending the self-organizing feature map (Kohonen 1989, 1990),
a highly successful static pattern classification method, to sequential information
(Kangas 1991; Samarabandu and Jakubowicz 1990; Scholtes 1991). Below, three of
the most recent of these networks are briefly described. The remainder of the
paper focuses on a new architecture designed to overcome the shortcomings of these
approaches. Recently, Chappel and Taylor
(1993) proposed the Temporal Kohonen Map (TKM) architecture for classifying sequences.
The TKM keeps track of the activation history of each node by updating a value
called leaky integrator potential, inspired by the membrane potential in biological
neural systems. The activity of a node depends both on the current input vector
and the previous input vectors, represented by the node''s potential. A given
sequence is processed by mapping one vector at a time, and the last winning node
serves to represent the entire sequence. This way, there needs to be a separate
node for every possible sequence, which is a disadvantage when the number of sequences
to be classified is large. The TKM also suffers from loss of context. Which node
wins depends almost entirely upon the most recent input vectors. For example,
the string baaaa would most likely map to the same node as aaaaa, making the approach
applicable only to short sequences. The SOFM-S network proposed by van Harmelen
(1993) extends TKM such that the activity of each map node depends on the current
input vector and the past activation of all map nodes. The SOFM-S is an improvement
of TKM in that contextual information is not lost as quickly, but it still uses
a single node to represent a sequence. The TRACE feature map (Zandhuis 1992) has
two feature map layers. The first layer is a topological map of the individual
input vectors, and is used to generate a trace (i.e. path) of the input sequence
on the map . The second layer then maps the trace pattern to a single node. In
TRACE, the sequences are represented by distributed patterns on the first layer,
potentially allowing for larger capacity, but it is difficult to encode sequences
where the same vectors repeat, such as baaaa. All a-vectors would be mapped on
the same unit in the first layer, and any number of a-vectors would be indistinguishable.
The architecture described in this paper, SARDNET (Sequential Activation Retention
and Decay NETwork), also uses a subset of map nodes to represent the sequence
of vectors. Such a distributed approach allows a large number of representations
to be "packed" into a small map, like sardines. In the following sections, we will
examine how SARDNET differs from conventional self-organizing maps and how it
can be used to represent and classify a large number of complex sequences. 2 THE
SARDNET ARCHITECTURE Input to SARDNET consists of a sequence of n-dimensional
vectors S = V_1, V_2, V_3, ..., V_l (figure 1). The components of each vector
are real values in the interval [0,1]. For example, each vector might represent
a sample of a speech signal in n different frequencies, and the entire sequence
might constitute a spoken word. The SARDNET input layer consists of n nodes, one
for each component in the input vector, and their values are denoted as A = (a_1,
a_2, a_3, ..., a_n). The map consists of m x m nodes with activation η_jk, 1 ≤ j,
k ≤ m. Each node has an n-dimensional input weight vector W_jk, which determines
the node''s response to the input activation. In a conventional feature map network
as well as in SARDNET, each input vector is mapped on a particular unit on the
map, called the winner or the maximally responding unit. In SARDNET, however,
once a node wins an input, it is made ineligible to respond to the subsequent
inputs in the sequence. Figure 1: The SARDNET architecture. A sequence of input
vectors activates units on the map one at a time. The past winners are excluded
from further competition, and their activation is decayed gradually to indicate
position in the sequence. INITIALIZATION: Clear all map nodes to zero. MAIN LOOP:
While not end of sequence: 1. Find the unactivated weight vector that best matches
the input. 2. Assign 1.0 activation to that unit. 3. Adjust weight vectors of
the nodes in the neighborhood. 4. Exclude the winning unit from subsequent competition.
5. Decrement activation values for all other active nodes. RESULT: Sequence representation = activated
nodes ordered by activation values. Table 1: The SARDNET training algorithm.
This way a different map
node is allocated for every vector in the sequence. As more vectors come in, the
activation of the previous winners decays. In other words, each sequence of length
l is represented by l active nodes on the map, with their activity indicating
the order in which they were activated. The algorithm is summarized in table 1.
Assume the maximum length of the sequences we wish to classify is l, and each input
vector component can take on p possible values. Since there are p^n possible input
vectors, l*p^n map nodes are needed to represent all possible vectors in all possible
positions in the sequence, and a distributed pattern over the l*p^n nodes can be
used to represent all p^(nl) different sequences. This approach offers a significant
advantage over methods in which p^(nl) nodes would be required for p^(nl) sequences.
The specific computations of the SARDNET algorithm are as follows: The winning
node (j, k) in each iteration is determined by the Euclidean distance D_jk between the
input vector A and the node''s weight
vector W_jk. The unit with the smallest distance is selected as the winner and
activated with 1.0. The weights of this node and all nodes in its neighborhood
are changed according to the standard feature map adaptation rule, where α denotes
the learning rate. As usual, the neighborhood starts out large and is gradually
decreased as the map becomes more ordered. As the last step in processing an input
vector, the activation η_jk of all active units in the map is decayed in proportion
to the decay parameter d. As in the standard feature map, as the weight vectors
adapt, input vectors gradually become encoded in the weight vectors of the winning
units. Because weights are changed in local neighborhoods, neighboring weight
vectors are forced to become as similar as possible, and eventually the network
forms a topological layout of the input vector space. In SARDNET, however, if
an input vector occurs multiple times in the same input sequence, it will be represented
multiple times on the map as well. In other words, the map representation expands
those areas of the input space that are visited most often during an input sequence.
3 EXPERIMENTS SARDNET has proven successful in learning and recognizing arbitrary
sequences of binary and real numbers , as well as sequences of phonemic representations
for English words. This section presents experiments on mapping three-syllable
words. This data was selected because it shows how SARDNET can be applied to complex
input derived from a real-world task. 3.1 INPUT DATA The phonemic word representations
were obtained from the CELEX database of the Max Planck Institute for Psycholinguistics
and converted into International Phonetic Alphabet (IPA)-compliant representation,
which better describes similarities among the phonemes. The words vary from five
to twelve phonemes in length. Each phoneme is represented by five values: place,
manner, sound, chromacity and sonority. For example, the consonant p is represented
by a single vector (bilabial, stop, unvoiced, nil, nil), or in terms of real numbers,
(.125, .167, .750, 0, 0). The diphthong sound ai, as in "buy", is represented by
the two vectors (nil, vowel, voiced, front, low) and (nil, vowel , voiced, front-center,
hi-mid), or in real numbers , There are a total of 43 phonemes in this data set,
including 23 consonants and 20 vowels. To represent all phonemic sequences of
length 12, TKM and SOFM-S would
(Figure 2: Accuracy of SARDNET for different map and data set sizes. The accuracy
is measured as a percentage of unique representations out of all word sequences.)
need to have 45^12 (about 6.9 x 10^19) map nodes, whereas SARDNET would need only 45 x 12 = 540
nodes. Of course, only a very small subset of the possible sequences actually
occur in the data. Three data sets consisting of 713, 988, and 1628 words were
used in the experiments. If the maximum number of occurrences of phoneme i in
any single sequence is c_i, then the number of nodes SARDNET needs is C = sum_{i=1}^{N} c_i,
where N is the number of phonemes. This number of nodes will allow SARDNET
to map each phoneme in each sequence to a unit with an exact representation of
that phoneme in its weights. Calculated this way, SARDNET should scale up very
well with the number of words: it would need 81 nodes for representing the 713
3.2 DENSENESS AND ACCURACY A series of experiments with the above three data sets
and maps of 16 to 81 nodes were run to see how accurately SARDNET can represent
the sequences. Self-organization was quite fast: each simulation took only about
10 epochs, with α = 0.45 and the neighborhood radius decreasing gradually from
5-1 to zero. Figure 2 shows the percentage of unique representations for each
data set and map size. SARDNET shows remarkable representational power: accuracy for
all sets is better than 97.7%, and SARDNET manages to pack 1592 unique representations
even on the smallest 16-node map. Even when there are not enough units to represent
each phoneme in each sequence exactly, the map is sometimes able to "reuse" units
to represent multiple similar phonemes . For example, assume units with exact
representations for the phonemes a and b exist somewhere on the map, and the input
data does not contain pairs of sequences such as aba-abb, in which it is crucial
to distinguish the second a from the second b. In this case, the second occurrence
of both phonemes could be represented by the same unit with a weight vector that
is the average of a and b. This is exactly what the map is doing: it is finding
the most descriptive representation of the data, given the available resources.
Note that it would be possible to determine
the needed C = sum_{i=1}^{N} c_i phoneme representation vectors directly from the input
data set, and without any learning or a map structure at all, establish distributed
representations on these vectors with the SARDNET algorithm. However, feature
map learning is necessary if the number of available representation vectors is
less than C. The topological organization of the map allows finding a good set
of reusable vectors that can stand for different phonemes in different sequences,
making the representation more efficient. 3.3 REPRESENTING SIMILARITY Not only
are the representations densely packed on the map, they are also descriptive in
the sense that similar sequences have similar representations. Figure 3 shows
the final activation patterns on the 36-unit, 713-word map for six example words.
The first two words, "misplacement" and "displacement," sound very similar, and
are represented by very similar patterns on the map. Because there is only one
m in "displacement" , it is mapped on the same unit as the initial m of "misplacement."
Note that the two m units are mapped next to each other, indicating that the map is
indeed topological, and small changes in the input cause only small changes in
the map representation. Note also how the units in this small map are reused to
represent several different phonemes in different contexts. The other examples
in figure 3 display different types of similarities with "misplacement". The
third word, "miscarried", also begins with "mis", and shares that subpart of the
representation exactly. Similarly, "repayment" shares a similar tail and "pessimist"
the subsequence "mis" in a different part or the word. Because they appear in
a different context, these subsequences are mapped on slightly different units,
but still very close to their positions with "misplacement." The last word, "burundi"
sounds very different, as its representation on the map indicates. Such descriptive
representations are important when the map has to represent information that
is incomplete or corrupted with noise. Small changes in the input sequence cause
small changes in the pattern, and the sequence can still be recognized. This
property should turn out extremely important in real-world applications of SARDNET,
as well as in cognitive science models where confusing similar patterns with
each other is often plausible behavior. 4 DISCUSSION AND FUTURE RESEARCH Because
the sequence representations on the map are distributed, the number of possible
sequences that can be represented in m units is exponential in m, instead of linear
as in most previous sequential feature map architectures. This denseness together
with the tendency to map similar sequences to similar representations should turn
out useful in real-world applications, which often require scale-up to large and
noisy data sets. For example, SARDNET could form the core of an isolated word
recognition system. The word input would be encoded in duration normalized sequences
of sound samples such as a string of phonemes, or perhaps representations of salient
transitions in the speech signal. It might also be possible to modify SARDNET
to form a more continuous trajectory on the map so that SARDNET itself would take
care of variability in word duration. (Figure 3: Example map representations.) For example, a sequence
of redundant inputs could be reduced to a single node if all these inputs fall
within the same neighborhood. Even though the sequence representations are dense,
they are also descriptive. Category memberships are measured not by labels of
the maximally responding units, but by the differences in the response patterns
themselves. This sort of distributed representation should be useful in cognitive
systems where sequential input must be mapped to an internal static representation
for later retrieval and manipulation. Similarity-based reasoning on sequences
should be easy to implement, and the sequence can be easily recreated from the
activity pattern on the map. Given part of a sequence, SARDNET may also be modified
to predict the rest of the sequence. This can be done by adding lateral connections
between the nodes in the map layer. The lateral connections between successive
winners would be strengthened during training. Thus, given part of a sequence,
one could follow the strongest lateral connections to complete the sequence.
5 CONCLUSION SARDNET is a novel feature map
architecture for classifying sequences of input vectors. Each sequence is mapped
on a distributed representation on the map, making it possible to pack a remarkably
large number of category representations on a small feature map. The representations
are not only dense, they also represent the similarities of the sequences, which
should turn out useful in cognitive science as well as real-world applications
of the architecture. Acknowledgments Thanks to Jon Hilbert for converting CELEX
data into the International Phonetic Alphabet format used in the experiments.
This research was supported in part by the National Science Foundation under grant
IRI-9309273. References Chappel , G. J., and Taylor, J. G. (1993). The temporal
Kohonen map. Neural Kangas, J. (1991). Time-dependent self-organizing maps for
speech recognition. In Proceedings of the International Conference on Artificial
Neural Networks (Espoo, Finland), 1591-1594. Amsterdam; New York: North-Holland.
Kohonen, T. (1989). Self-Organization and Associative Memory. Berlin; Heidelberg;
New York: Springer. Third edition. Kohonen, T . (1990). The self-organizing map.
Proceedings of the IEEE, 78:1464- Samarabandu, J. K., and Jakubowicz, O. G . (1990).
Principles of sequential fea ture maps in multi-level problems. In Proceedings
of the International Joint Conference on Neural Networks (Washington, DC), vol.
II, 683-686. Hillsdale, NJ: Erlbaum. Scholtes, J. C. (1991). Recurrent Kohonen
self-organization in natural language processing. In Proceedings of the International
Conference on Artificial Neu ral Networks (Espoo, Finland), 1751-1754. Amsterdam;
New York: North Holland. van Harmelen, H. (1993). Time dependent self-organizing
feature map for speech recognition. Master''s thesis, University of Twente, Enschede,
the Netherlands. Zandhuis, J. A . (1992). Storing sequential data in self-organizing
feature maps. Internal Report MPI-NL- TG-492, Max-Planck-Institute fur Psycholinguistik,
Nijmegen, the Netherlands.'
- 'INTRODUCTION Measurement of facial expressions is important for research and
assessment in psychiatry, neurology, and experimental psychology (Ekman, Huang,
Sejnowski, Hager, 1992), and has technological applications in consumer-friendly
user interfaces, interactive video and entertainment rating. The Facial Action
Coding System (FACS) is a method for measuring facial expressions in terms of
activity in the underlying facial muscles (Ekman Friesen, 1978). We are exploring
ways to automate FACS. Rather than classifying images into emotion categories such as happy, sad,
or surprised, the goal of this work is instead to detect the muscular actions
that comprise a facial expression. FACS was developed in order to allow researchers
to measure the activity of facial muscles from video images of faces. Ekman and
Friesen defined 46 distinct action units, each of which correspond to activity
in a distinct muscle or muscle group, and produce characteristic facial distortions
which can be identified in the images. Although there are static cues to the facial
actions, dynamic information is a critical aspect of facial action coding. FACS
is currently used as a research tool in several branches of behavioral science,
but a major limitation to this system is the time required to both train human
experts and to manually score the video tape. Automating the Facial Action Coding
System would make it more widely accessible as a research tool, and it would provide
a good foundation for human-computer interactions tools. Why Detect Facial Actions?
Most approaches to facial expression recognition by computer have focused on classifying
images into a small set of emotion categories such as happy, sad, or surprised
(Mase, 1991; Yacoob Davis, 1994; Essa Pentland, 1995). Real facial signals,
however, consist of thousands of distinct expressions that often differ in only
subtle ways. These differences can signify not only which emotion is occurring,
but whether two or more emotions have blended together, the intensity of the emotion(s),
and if an attempt is being made to control the expression of emotion (Hager Ekman).
An alternative to training a system explicitly on a large number of expression
categories is to detect the facial actions that comprise the expressions. Thousands
of facial expressions can be defined in terms of this smaller set of structural
components. We can verify the signal value of these expressions by reference
to a large body of behavioral data relating facial actions to emotional states
which have already been scored with FACS. FACS also provides a means for obtaining
reliable training data. Other approaches to automating facial measurement have
mistakenly relied upon voluntary expressions, which tend to contain exaggerated
and redundant cues, while omitting some muscular actions altogether (Hager Ekman,
1995). 2 IMAGE DATABASE We have collected a database of image sequences of subjects
performing specified facial actions. The full database contains over 1100 sequences
containing over 150 distinct actions, or action combinations, and 24 different
subjects. The sequences contain 6 images, beginning with a neutral expression
and ending with a high intensity muscle contraction (Figure 1). For our initial
investigation we used data from 20 subjects and attempted to classify the six
individual upper face actions illustrated in Figure 2. The information that is
available in the images for detecting and discriminating these actions include
distortions in the shapes and relative positions of the eyes and eyebrows, the
appearance of wrinkles, bulges, and furrows, in specific regions of the face,
and motion of the brows and eyelids. Prior to classifying the images, we manually
located the eyes, and we used this information to crop a region around the upper
face and scale the images to 360 x 240. The images were rotated so that the eyes
were horizontal, and the luminance was normalized. Accurate image registration
is critical for principal components based approaches. For the holistic analysis
and flow fields, the images were further scaled
to 22 x 32 and 66 x 96, respectively. Since the muscle contractions are frequently
asymmetric about the face, we doubled the size of our data set by reflecting each
image about the vertical axis, giving a total of 800 images. Figure 1: Example
action sequences from the database. Figure 2: Examples of the six actions used
in this study. AU 1: Inner brow raiser. 2: Outer brow raiser. 4: Brow lower. 5:
Upper lid raiser (widening the eyes). 6: Cheek raiser. 7: Lid tightener (partial
squint). 3 HOLISTIC SPATIAL ANALYSIS The Eigenface (Turk Pentland, 1991) and
Holon (Cottrell Metcalfe, 1991) representations are holistic representations
based on principal components, which can be extracted by feed forward networks
trained by back propagation. Previous work in our lab and others has demonstrated
that feed forward networks taking such holistic representations as input can successfully
classify gender from facial images (Cottrell Metcalfe, 1991; Golomb, Lawrence, Sejnowski,
1991). We evaluated the ability of a back propagation network to classify facial
actions given principal components of graylevel images as input. The primary difference
between the present approach and the work referenced above is that we take the
principal components of a set of difference images, which we obtained by subtracting
the first image in the sequence from the subsequent images (see Figure 3). The
variability in our data set is therefore due to the facial distortions and individual
differences in facial distortion, and we have removed variability due to surface-level
differences in appearance. We projected the difference images onto the first N
principal components of the dataset, and these projections comprised the input
to a 3 layer neural network with 10 hidden units, and six output units, one per
action (Figure 3.) The network is feed forward and fully connected with a hyperbolic
tangent transfer function, and was trained with conjugate gradient descent. The
output of the network was determined using winner take all, and generalization
to novel subjects was determined by using the leave-one-out, or jackknife, procedure
in which we trained the network on 19 subjects and reserved all of the images
from one subject for testing. This process was repeated for each of the subjects
to obtain a mean generalization performance across 20 test cases. We obtained the best performance
with 50 component projections, which gave 88.6% correct across subjects. The benefit
obtained by using principal components over the 704-dimensional difference images
themselves is not large. Feeding the difference images directly into the network
gave a performance of 84% correct. Figure 3: Left: Example difference
image. Input values of -1 are mapped to black and 1 to white. Right: Architecture
of the feed forward network. 4 FEATURE MEASUREMENT We turned next to explicit
measurement of local image features associated with these actions. The presence
of wrinkles in specific regions of the face is a salient cue to the contraction
of specific facial muscles. We measured wrinkling at the four facial positions
marked in Figure 4a, which are located in the image automatically from the eye
position information. Figure 4b shows pixel intensities along the line segment
labeled A, and two major wrinkles are evident. We defined a wrinkle measure P
as the sum of the squared derivative of the intensity values along the segment
(Figure 4c.) Figure 4d shows P values along line segment A, for a subject performing
each of the six actions. Only AU 1 produces wrinkles in the center of the forehead.
The P values remain at zero except for AU 1, for which it increases with increases
in action intensity. We also defined an eye opening measure as the area of the
visible sclera lateral to the iris. Since we were interested in changes in these
measures from baseline, we subtract the measures obtained from the neutral image.
Pixel Image in Seqence Figure 4: a) Wrinkling was measured at four image locations,
A-D. b) Smoothed pixel intensities along the line labeled A. c) Wrinkle measure.
d) P measured at image location A for one subject performing each of the six actions.
We classified the actions from these five feature measures using a 3-layer neural
net with 15 hidden units. This method performs well for some subjects but not
for others, depending on age and physiognomy. It achieves an overall generalization
performance of 57% correct. (Figure 5: Example flow field for a subject performing
AU 7, partial closure of the eyelids. Each flow vector is plotted as an arrow
that points in the direction of motion. Axes give image location.)
5 OPTIC FLOW The motion that results from facial action provides another
important source of information. The third classifier attempts to classify facial
actions based only on the pattern of facial motion. Motion is extracted from image
pairs consisting of a neutral image and an image that displays the action to be
classified. An approximation to flow is extracted by implementing the brightness
constraint equation (2), where the velocity (v_x, v_y) at each image point is estimated
from the spatial and temporal gradients of the image I. The velocities can only
be reliably extracted at points of large gradient, and we therefore retain only
the velocities from those locations. One of the advantages of this simple local
estimate of flow is speed. It takes 0.13 seconds on a 120 MHz Pentium to compute
one flow field. A resulting flow image is illustrated in Figure 5. We obtained
weighted templates for each of the actions by taking mean flow fields from 10
subjects. We compared novel flow patterns, r to the template ft by the similarity
measure S (3). S is the normalized dot product of the novel flow field with the
template flow field. This template matching procedure gave 84.8 accuracy for novel
subjects. Performance was the same for the ten subjects used in the training 6
COMBINED SYSTEM Figure 6 compares performance for the three individual methods
described in the previous sections. Error bars give the standard deviation for
the estimate of gener alization to novel subjects. We obtained the best performance
when we combined all three sources of information into a single neural network.
The classifier is a 828 BAR1LETI, VIOLA, SEJNOWSKI, GOLOMB, LARSEN, HAGER, EKMAN
I 6 Output I WTA Classifier Figure 6: Left: Combined system architecture. Right:
Performance comparisons. Holistic v. Flow Feature v. Row Feature v. Holistic Figure
7: Performance correlations among the three individual classifiers. Each data
point is performance for one of the 20 subjects. feed forward network taking 50
component projections, 5 feature measures, and 6 template matches as input (see
Figure 6.) The combined system gives a generalization performance of 92, which
is an im provement over the best individual method at 88.6. The increase in performance
level is statistically significant by a paired t-test. While the improvement is
small, it constitutes about 30 of the difference between the best individual classifier
and perfect performance. Figure 6 also shows performance of human subjects on
this same dataset. Human non-experts can correctly classify these images with
about 74 accuracy. This is a difficult classification problem that requires considerable
training for people to be able to perform well. We can examine how the combined
system benefits from multiple input sources by looking at the cprrelations in
performance of the three individual classifiers. Combining estimators is most
beneficial when the individual estimators make very different patterns of errors.1
The performance of the individual classifiers are com pared in Figure 7. The holistic
and the flow field classifiers are correlated with a coefficient of 0.52. The
feature based system, however, has a more independent pattern of errors from the
two template-based methods. Although the stand-alone performance of the feature
based system is low, it contributes to the combined system because it provides
estimates that are independent from the two template-based systems. Without the
feature measures, we lose 40 of the improvement. Since we have only a small number
of features, this data does not address questions about whether templates are
better than features, but it does suggest that local features plus templates may
be superior to either one alone, since they may have independent patterns of errors.
1. Tom Dietterich, Connectionists mailing list, July 24, 1993.
7 DISCUSSION We have evaluated the performance of three approaches
to image analysis on a dif ficult classification problem. We obtained the best
performance when information from holistic spatial analysis, feature measurements,
and optic flow fields were com bined in a single system. The combined system classifies
a face in less than a second on a 120 MHz Pentium. Our initial results are promising
since the upper facial actions included in this study represent subtle distinctions
in facial appearance that require lengthy training for humans to make reliably.
Our results compare favorably with facial expression recognition systems developed
by Mase (1991), Yacoob and Davis (1994), and Padgett and Cottrell (1995), who
obtained 80%, 88%, and 88% accuracy respectively for classifying up to six full face
expressions. The work presented here differs from these systems in that we attempt
to detect individual muscular actions rather than emotion categories, we use
a dataset of labeled facial actions, and our dataset includes low and medium intensity
muscular actions as well as high intensity ones. Essa and Pentland (1995) attempt
to relate facial expressions to the underlying musculature through a complex physical
model of the face. Since our methods are image-based, they are more adaptable
to variations in facial structure and skin elasticity in the subject population.
We intend to apply these techniques to the lower facial actions and to action
com binations as well. A completely automated method for scoring facial actions
from images would have both commercial and research applications and would reduce
the time and expense currently required for manual scoring by trained observers.
Acknow ledgments This research was supported by Lawrence Livermore National Laboratories,
Intra University Agreement B291436, NSF Grant No. BS-9120868, and Howard Hughes
Medical Institute. We thank Claudia Hilburn for image collection. References Cottrell,
G., Metcalfe, J. (1991): Face, gender and emotion recognition using holons. In
Advances in Neural Information Processing Systems 9, D. Touretzky, (Ed.) San Mateo:
Ekman, P., Friesen, W. (1978): Facial Action Coding System: A Technique for the
Measurement of Facial Movement. Palo Alto, CA: Consulting Psychologists Press.
Ekman, P., Huang, T., Sejnowski, T., Hager, J. (1992): Final Report to NSF of
the Planning Workshop on Facial Expression Understanding. Available from HIL-0984,
UCSF, San Francisco, CA 94143. Essa, I., Pentland, A. (1995). Facial expression
recognition using visually extracted facial action parameters. Proceedings of
the International Workshop on Automatic Face- and Gesture-Recognition. University
of Zurich, Multimedia Laboratory. Golomb, B., Lawrence, D., Sejnowski, T. (1991).
SEXnet: A neural network identifies sex from human faces. In Advances in Neural
Information Processing Systems 9, D. Touretzky, (Ed.) San Mateo: Morgan Kaufman:
572 - 577. Hager, J., Ekman, P., (1995). The essential behavioral science of
the face and gesture that computer scientists need to know. Proceedings of the
International Workshop on Automatic Face-and Gesture-Recognition. University of
Zurich, Multimedia Laboratory. Mase, K. (1991): Recognition of facial expression
from optical flow. IEICE Transactions Padgett, C., Cottrell, G., (1995). Emotion
in static face images. Proceedings of the Institute for Neural Computation Annual
Research Symposium, Vol 5. La Jolla, CA. Turk, M., Pentland, A. (1991): Eigenfaces
for Recognition. Journal of Cognitive Neu Yacoob, Y., Davis, L. (1994): Recognizin
human facial expression. University of Maryland Center for Automation Research
Technical Report No. 706.'
- 'Introduction Certain classification problems, such as recognizing the digits
of a handwritten zip code, require the assignment of each object to a class.
Others, involving relatively small amounts of data and high risk, call for indecision
until more data become available. Examples in such areas as medical diagnosis,
stock trading and radar detection are well known. The training data for the classifier
in both cases will correspond to firmly labeled members of the competing classes.
(A patient may be
either ill or healthy. A stock price may increase, decrease or stay the same).
Yet, the classification of new objects need not be firm. (A given patient may
be kept in hospital for further observation. A given stock need not be bought
or sold every day). We call classification of the first kind "firm" and classification
of the second kind "soft". The latter is not the same as training the classifier
with a "don''t care" option, which would be just another firm labeling option,
as "yes" and "no", and would require firm classification. A classifier that correctly
classifies the training data is called "consistent". Consistent classifier reductions
have been considered in the contexts of the nearest neighbor criterion (Hart,
1968) and decision trees (Holte, In this paper we present a geometric approach
to consistent firm and soft classification. The classifiers are based on unions
of local separators, which cover all the labeled points of a given class, and
separate them from the others. We propose a consistent reduction of the nearest
neighbor classifier and derive its expected design complexity and the expected
classifier size. The nearest neighbor classifier and its consistent derivatives
perform "firm" classification. Soft classification is performed by unions of maximal
-volume spherical local separators. A domain of indecision is created near the
boundary between the two sets of class-labeled points, and in regions where there
is no data. We propose an economically motivated benefit function for a classifier
as the difference between the probabilities of success and failure. Employing
the respective benefit functions, the advantage of soft classification over firm
classification is shown to depend on the rate of indecision. The performances
of the proposed algorithms in predicting stock behavior are compared to those
of the nearest neighbor method. 2 Consistent Firm Classification Consider a finite
set of points X = {x^(i), i = 1, ..., N} in some subset of R^n, the real space of
dimension n . Suppose that each point of X is assigned to one of two classes,
and let the corresponding subsets of X, having N1 and N2 points, respectively,
be denoted X_1 and X_2. We shall say that the two sets are labeled L_1 and L_2,
respectively. It is desired to divide R^n into labeled regions, so that new,
unlabeled points can be assigned to one of the two classes. We define a local
separator of a point x of X_1 with respect to X_2 as a convex set, s(x|2), which
contains x and no point of X2. A separator family is defined as a rule that produces
local separators for class-labeled points. We call the set of those points of
Rn that are closer to a point x E Xl than to any point of X2 the minimum-distance
local separator of x with respect to X2. We define the local clustering degree,
c, of the data as the expected fraction of data points that are covered by a local
minimum-distance separator. The nearest neighbor criterion extends the class
assignment of a point x E Xl to its minimum-distance local separator. It is clearly
a consistent and firm classifier whose memory size is O(N). Hart''s Condensed
Nearest Neighbor (CNN) classifier (Hart, 1968) is a consis tent subset of the
data points that correctly classifies the entire data by the nearest neighbor
method. It is not difficult to show that the complexity of the algorithm
proposed by Hart for finding such a subset is O(N^3). The expected memory
requirement (or classifier size) has remained an open question. We propose the
following Reduced Nearest Neighbor (RNN) classifier: include a labeled point in
the consistent subset only if it is not covered by the minimum distance local
separator of any of the points of the same class already in the subset. It can
be shown (Baram, 1996) that the complexity of the RNN algorithm is O(N^2), and
that the expected classifier size is O(log_{1/(1-c)} N). It can also be shown that
the latter bounds the expected size of the CNN classifier as well. It has been
suggested that the utility of Occam''s razor in classification would be: "Given
a choice between two plausible classifiers that perform identically on the data
set, the simpler classifier is expected to classify correctly more objects outside
the training set". The above statement is disproved by the CNN and the RNN classifiers,
which are strict consistent reductions of the nearest neighbor classifier, likely
to produce more errors. 3 Soft Classification: Indecision Pays, Sometimes When
a new, unlabeled, point is closely surrounded by many points of the same class,
its assignment to the same class can be said to be unambiguously supported by
the data. When a new point is surrounded by points of different classes, or when
it is relatively far from any of the labeled points, its assignment to either
class can be said to be unsupported or ambiguously supported by the data. In the
latter cases, it may be more desirable to have a certain indecision domain, where
new points will not be assigned to a class. This will translate into the creation
of indecision domains near the boundary between the two sets of labeled points
and where there is no data. We define a separator S(1|2) of X_1 with respect to
X_2 as a set that includes X_1 and excludes X_2. Given a separator family, the union
of local separators S(x^(i)|2) of the points is a separator of X_1 with respect
to X_2. It consists of N_1 local separators. Let X_1c be a subset of X_1. The set
will be called a consistent separator of X_1 with respect to X_2 if it contains
all the points of X_1. The set X_1c will then be called a consistent subset with
respect to the given separator family. Let us extend the class assignment of each
of the labeled points to a local separator of a given family and maximize the
volume of each of the local separators without including in it any point of the
competing class. Let S_c(1|2) and
S_c(2|1) be consistent separators of the two sets, consisting of maximal-volume
(or, simply, maximal) local separators of labeled points of the corresponding classes.
The intersection of S_c(1|2) and S_c(2|1) defines a conflict and will be called
a domain of ambiguity of the first kind. A region uncovered by either separator
will be called a domain of ambiguity of the second kind. The union of the domains
of ambiguity will be designated the domain of indecision. The remainders of the
two separators, excluding their intersection, define the conflict-free domains
assigned to the two classes. The resulting "soft" classifier rules out hard conflicts,
where labeled points of one class are included in the separator of the other.
Yet, it allows for indecision in areas which are either claimed by both separators
or claimed by neither. Let the true class be denoted y (with possible values,
e.g., y = 1 or y = 2) and let the classification outcome be denoted ŷ. Let the probabilities
of decision and indecision by the soft classifier be denoted Pd and Pid, respectively
(of course, P_id = 1 - P_d), and let the probabilities of correct and incorrect
decisions by the firm and the soft classifiers be denoted P_firm{ŷ = y}, P_firm{ŷ ≠ y},
P_soft{ŷ = y} and P_soft{ŷ ≠ y}, respectively. Finally, let the joint
probabilities of a decision being made by the soft classifier and the correctness
or incorrectness of the decision be denoted, respectively, P_soft{d, ŷ = y} and
P_soft{d, ŷ ≠ y}, and let the corresponding conditional probabilities be denoted
P_soft{ŷ = y | d} and P_soft{ŷ ≠ y | d}, respectively. We define the benefit of
using the firm classifier as the difference between the probability that a point
is classified correctly by the classifier and the probability that it is misclassified:
This definition is motivated by economic consideration: the profit produced by
an investment will be, on average, proportional to the benefit function. This
will become more evident in a later section, where we consider the problem of stock
trading. For a soft classifier, we similarly define the benefit as the difference
between the probability of a correct classification and that of an incorrect one
(which, in an economic context, assumes that indecision has no cost, other than
the possible loss of profit). Now, however, these probabilities are for the joint
events that a classification is made, and that the outcome is correct or incorrect,
respectively: Soft classification will be more beneficial than firm classification if Bsoft > Bfirm, which may be written as condition (5). For the latter to be a useful condition, it is necessary that Pfirm{ŷ = y} > 0.5, Psoft{ŷ = y | d} > 0.5 and Psoft{ŷ = y | d} > Pfirm{ŷ = y}. The latter will be normally satisfied, since points of the same class can be expected to be denser under the corresponding separator than in the indecision domain. In other words, the error ratio produced by the
soft classifier on the decided cases can be expected to be smaller than the error
ratio produced by the firm classifier, which decides on all the cases. The satisfaction
of condition (5) would depend on the geometry of the data. It will be satisfied
for certain cases, and will not be satisfied for others. This will be numerically
demonstrated for the stock trading problem. The maximal local spherical separator of x is defined by the open sphere centered at x, whose radius r(x|2) is the distance between x and the point of X2 nearest to x. Denoting by s(x, r) the sphere of radius r in Rn centered at x, the maximal local separator is then sM(x|2) = s(x, r(x|2)). A separator construction algorithm employing maximal local spherical separators is described below. Its complexity is clearly O(N^2). Let X̃1 = X1. For each of the points x(i) of X̃1, find the minimal distance to the points of X2; call it r(x(i)|2). Select the point x(i) for which r(x(i)|2) ≥ r(x(j)|2), j ≠ i, for the consistent subset. Eliminate from X̃1 all the points that are covered by sM(x(i)|2). Denote the remaining set X̃1. Repeat the procedure while X̃1 is non-empty. The union of the maximal local spherical separators is a separator for X1 with respect to X2. 4 Example: Firm and soft prediction of stock behaviour
Given a sequence of k daily trading ("close") values of a stock, it is desired
to predict whether the next day will show an increase or a decrease with respect
to the last day in the sequence. Records for ten different stocks, each containing,
on average, 1260 daily values were used. About 60 percent of the data were used
for training and the rest for testing. The CNN algorithm reduced the data by 40% while the RNN algorithm reduced the data by 35%. Results are shown in Fig. 1. It
can be seen that, on average, the nearest neighbor method has produced the best
results. The performances of the CNN and the RNN classifiers (the latter producing
only slightly better results) are somewhat lower. It has been argued that performance
within a couple of percentage points by a reduced classifier supports the utility
of Occam''s razor (Holte, 1993). However, a couple of percentage points can be
quite meaningful in stock trading. In order to evaluate the utility of soft classification
in stock trading, let the prediction success rate of a firm classifier be denoted
f and that of a soft classifier for the decided cases s. For a given trade, let
the gain or loss per unit invested be denoted q, and the rate of indecision of
the soft classifier ir. Suppose that, employing the firm classifier, a stock is
traded once every day (say, at the "close" value), and that, employing the soft
classifier, it is traded on a given day only if a trade is decided by the classifier
(that is, the input does not fall in the indecision domain). The expected profit
for M days per unit invested is 2(f − 0.5)qM for the firm classifier and 2(s − 0.5)q(1 − ir)M for the soft classifier (these values disregard possible commission and slippage costs). The soft classifier will be preferred over the firm one if the latter quantity is greater than the former, that is, if (s − 0.5)(1 − ir) > f − 0.5, which is the sample
representation of condition (5) for the stock trading problem. Figure 1: Success rates in the prediction of rise and fall in stock values. Results for the soft classifier,
applied to the stock data, are presented in Fig. 1. The indecision rates and the
success rates in the decided cases are then specified along with a benefit sign.
A positive benefit represents a satisfaction of condition (6), with ir, f and
s replaced by the corresponding sample values given in the table. This indicates
a higher profit in applying the soft classifier over the application of the nearest
neighbor classifier. A negative benefit indicates that a higher profit is produced
by the nearest neighbor classifier. It can be seen that for two of the stocks
(xdssi and xelrnf) soft classification has produced better results than firm classification,
and for the remaining eight stocks firm classification by the nearest neighbor
method has produced better results. 5 Conclusion Solutions to the consistent classification
problem have been specified in terms of local separators of data points of one class with respect to the other. The expected complexities of the proposed algorithms have been specified, along with the expected sizes of the resulting classifiers. Reduced consistent versions of the nearest neighbor classifier have been specified and their expected complexities have been derived. A notion of "soft" classification has been introduced, and an algorithm for its implementation has been presented and analyzed. A criterion for the utility of such classification has been presented
and its application in stock trading has been demonstrated. Acknowledgment The
author thanks Dr. Amir Atiya of Cairo University for providing the stock data
used in the examples and for valuable discussions of the corresponding results.
References Baram Y. (1996) Consistent Classification, Firm and Soft, CIS Report No. 9627, Center for Intelligent Systems, Technion, Israel Institute of Technology, Haifa 32000, Israel. Baum, E. B. (1988) On the Capabilities of Multilayer Perceptrons, J. Complexity. Hart, P. E. (1968) The Condensed Nearest Neighbor Rule, IEEE Trans. on Information Theory. Holte, R. C. (1993) Very Simple Classification Rules Perform Well on Most Commonly Used Datasets, Machine Learning, Vol. 11, No. 1, pp. 63-90. Rosenblatt, F. (1958) The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain, Psychological Review, Vol. 65, pp. 386-408. Webb, G. I. (1996) Further Experimental Evidence against the Utility of Occam''s Razor, J. of Artificial Intelligence Research 4, pp. 397-417.'
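The separator construction procedure described in the passage above is concrete enough to sketch in code. Below is a minimal Python illustration of the O(N^2) maximal local spherical separator algorithm as described; the function name, the use of numpy, and the toy data are my own assumptions, not part of the original paper.

```python
import numpy as np

def spherical_separators(X1, X2):
    """Greedy construction of a consistent subset of X1 with respect to X2.

    Each selected point x carries a maximal local spherical separator:
    the open sphere centered at x whose radius r(x|2) is the distance
    from x to the nearest point of the competing class X2.
    """
    remaining = X1.copy()
    centers, radii = [], []
    while len(remaining) > 0:
        # r(x|2): distance from each remaining point to its nearest point in X2
        d = np.linalg.norm(remaining[:, None, :] - X2[None, :, :], axis=2)
        r = d.min(axis=1)
        k = int(r.argmax())              # the point with the largest maximal sphere
        centers.append(remaining[k])
        radii.append(r[k])
        # eliminate all points of X1 covered by the selected sphere
        covered = np.linalg.norm(remaining - remaining[k], axis=1) < r[k]
        covered[k] = True
        remaining = remaining[~covered]
    return np.array(centers), np.array(radii)

# toy usage: two 2-D point clouds
rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, (40, 2))
X2 = rng.normal(3.0, 1.0, (40, 2))
C, R = spherical_separators(X1, X2)
print(len(C), "separator spheres retained out of", len(X1), "points")
```

Each pass costs O(N1·N2) distance evaluations, which matches the stated O(N^2) complexity; the retained centers form the consistent subset used by the reduced classifier.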
- source_sentence: Functional role of neurons in primary auditory cortex
sentences:
- 'Introduction Learning in biological systems is of great importance. But while
cognitive learning (or "problem solving") is typically abrupt and generalizes
to analogous problems, perceptual skills appear to be acquired gradually and specifically:
Human subjects cannot generalize a perceptual discrimination skill to solve similar
problems with different attributes. For example, in a visual discrimination task
(Fig. 1), a subject who is trained to discriminate motion directions between 43° and 47° cannot use this skill to discriminate 133° from 137°. Generalization has been found only when stimuli of different attributes are interleaved [7, 10], or when the task is easier [6, 1]. For example, a subject who is trained to discriminate 41° from 49° can later readily discriminate 131° from 139° [6]. The specificity of learning has been so far used to support the
hypothesis that perceptual learning embodies neuronal modifications in the brain''s
stimulus-specific cortical areas (e.g., visual area MT) [9,3, 2, 5, 8, 4]. In
contrast to previous results of learning specificity, we show in two experiments
in Section 2 that learning in motion discrimination generalizes in all cases where
specificity was thought to exist, although the mode of generalization varies.
(1) When the task is difficult, it is direction specific in the traditional sense;
but learning in a new direction accelerates. (2) When the task is easy, it generalizes
to all directions after training in only one direction. While (2) is consistent
with the findings reported in [6, 1], (1) demonstrate that generalization is the
rule, not an exception limited only to "easy" stimuli. 2 Perceptual learning experiments
Figure 1: Schematic of one trial (stimulus, then response; stimulus duration 500 ms). Left: the stimulus was a random dot pattern viewed in a circular aperture, spanning 8° of visual angle, moving in a given primary direction (denoted dir). The primary direction was chosen from 12 directions, separated by 30°. Right: the direction of each of the two stimuli was randomly chosen from two candidate directions (dir ± Δ/2). The subject judged whether the two stimuli moved in the same or different directions. Feedback was provided. The motion discrimination task is described in Fig. 1. In each trial, the subject was presented with two consecutive stimuli, each moving in one of two possible directions (randomly chosen from the two directions dir + Δ/2 and dir − Δ/2). The directional difference Δ between the two stimuli was 8° in the easy condition, and 4° in the difficult condition. The experiment was otherwise identical to that in [2] that used Δ = 3°, except that our stimuli were displayed on an SGI computer monitor. Δ = 8° was chosen as the easy condition because most subjects found it relatively easy to learn, yet still needed substantial training.
2.1 A difficult task We trained subjects extensively in one primary direction
with a difficult motion discrimination task (Δ = 4°), followed by extensive training
in a second primary direction. The two primary directions were sufficiently different so direct transfer between them was not expected [2] (Fig. 2). Subjects'' initial
performance in both directions was comparable, replicating the classical result
of stimulus specific learning (no direct transfer). However, all subjects took
only half as many training sessions to make the same improvement in the second
direction. All subjects had extensive practice with the task prior to this experiment,
thus the acceleration cannot be simply explained by familiarity. Our results show that although perceptual
learning did not directly transfer in this difficult task, it did nevertheless
generalize to the new direction. The generalization was manifested as a 100% increase
in the rate of learning in the second direction. It demonstrates that the generalization
of learning, as manifested via direct transfer and via increase in learning rate,
may be thought of as two extremes of a continuum of possibilities. Figure 2: Subjects DJ and ZL needed 20 training sessions in the first direction, and nine in the second; subject ZJX needed seven training sessions in the first, and four in the second. The rate of learning (the amount of improvement per session) in the second direction is significantly greater than in the first (t(2) = 13.41, p < 0.003).
We first measured the subjects'' baseline performance in an easy task - the discrimination of motion directions 8° apart - in 12 primary directions (64 trials
each, randomly interleaved). We then trained four subjects in one oblique primary
direction (chosen randomly and counter-balanced among subjects) for four sessions,
each with 700 trials. Finally, we measured again the subjects'' performance in
all directions. Every subject improved in all directions (Fig. 3). The performance
of seven control subjects was measured without intermediate training; two more
control subjects were added who were "trained" with similar motion stimuli but
were asked to discriminate a brightness change instead. The control subjects improved
as well, but significantly less (Δd'' = 0.09 vs. 0.78, Fig. 3). Our results clearly
show that training with an easy task in one direction leads to immediate improvement
in other directions. Hence the learned skill generalized across motion directions.
3 A computational model We will now adopt a general framework for the analysis
of perceptual learning results, using the language of signal detection theory.
Our model accounts for the results in this paper by employing the constraint of
limited computational resources. The model''s assumptions are as follows. 1. In
each trial, each of the two stimuli is represented by a population of measurements that encode all aspects of the stimulus, in particular, the output of localized
direction detectors. The measurements are encoded as a vector. The decision as
to whether the two stimuli are the same or not is determined by the difference
of the two vectors. 2. Each component of the input measurements is characterized
by its sensitivity for the discrimination task, e.g., how well the two motion
directions can be discriminated apart based on this component. The entire population itself is generally divided into two sets: informative - measurements with significant sensitivity, and uninformative - measurements with null sensitivity. (Figure 3: Left: discrimination sensitivity d'' of subject JY, who was trained in the primary direction 300°. Middle: d'' of control subject YHL, who had no training in between the two measurements. Right: average d'' (and standard error) for all subjects before and after training. Trained: results for the four trained subjects; note the substantial improvement between the two measurements. For these subjects, the d'' measured after training is shown separately for the trained direction (middle column) and the remaining directions (right column). Control: results for the nine control subjects, who improved their performance significantly less than the trained subjects.) In addition, informative measurements may vary greatly in their individual sensitivity. When
many have high sensitivity, the task is easy. When most have low sensitivity,
the task is difficult. We assume that sensitivity changes from one primary direction
to the next, but the population of informative measurements remains constant.
For example, in our psychophysical task localized directional signals are likely
to be in the informative set for any motion direction, though their individual
sensitivity will vary based on specific motion directions. On the other hand,
local speed signals are never informative and therefore always belong to the uninformative
set. 3. Due to limited computational capacity, the system can, at a time, only
process a small number of components of the input vector. The decision in a single
trial is therefore made based on the magnitude of this sub-vector, which may vary
from trial to trial. In each trial the system rates the processed components of
the sub-vector according to their sensitivity for the discrimination task. After
a sufficient number of trials (enough to estimate all the component sensitivities
of the sub-vector), the system identifies the least sensitive component and replaces
it in the next trial with a new random component from the input vector. In effect,
the system is searching from the input vector a sub-vector that gives rise to
the maximal discrimination sensitivity. Therefore the performance of the system
is gradually improving, causing learning from session to session in the training
direction. 4. After learning in one training direction, the system identifies
the sets of in formative and uninformative measurements and include in the informative
set any measurement with significant (though possibly low) sensitivity. In the
next training direction, only the set of informative measurements is searched.
The search becomes more efficient, and hence the acceleration of the learning
rate. This accounts for the learning between training directions. We further assume
that each stimulus generates a signal that is a vector of N measurements {I_i}, i = 1, ..., N. We also assume that the signal for the discrimination task is the difference between two stimulus measurements: x = {x_i}, x_i = ΔI_i. The same/different discrimination task is to decide whether x is generated by noise - the null vector 0 - or by some distinct signal - the vector S. At time t a measurement vector x^t is obtained, which we denote x^{st} if it is the signal S, and x^{nt} otherwise. Assume that each measurement in x^t is a normal random variable. We measure the sensitivity d'' of each component. Since both the signal and noise are assumed to be normal random variables, the sensitivity of the i-th measurement in the discrimination task is d_i'' = |μ_i|/σ_i. Assuming further that the measurements are independent of each other and of time, the combined sensitivity of M measurements is d'' = sqrt(Σ_{i=1}^M (μ_i/σ_i)²). 3.1 Limited resources: an assumption
We assume that the system can simultaneously process at most M « N of the original
N measurements. Since the sensitivity d of the different measurements varies,
the discrimination depends on the combined sensitivity of the particular set of
M measurements that are being used. Learning in the first training direction,
therefore, leads to the selection of a "good" subset of the measurements, obtained
by searching in the measurement space. After searching for the best M measurements
for the current training direction, the system divides the measurements into two
sets: those with non-negligible sensitivity, and those with practically null sensitivity.
This rating is kept for the next training direction, when only the first set is
searched. One prediction of this model is that learning rate should not increase
with exposure only. In other words, it is necessary for subjects to be exposed
to the stimulus and do the same discrimination task for effective inter-directional
learning to take place. For example, assume that the system is given N measurements
: N/2 motion direction signals and N/2 speed signals. It learns during the first training direction that the N/2 speed signals have null sensitivity for the direction
discrimination task, whereas the directional signals have varying (but significant)
sensitivity. In the second training direction, the system is given the N measurements
whose sensitivity profile is different from that in the first training direction,
but still with the property that only the directional signals have any significant
sensitivity (Fig. 4b). Based on learning in the first training direction, the
system only searches the measurements whose sensitivity in the first training
direction was significant, namely, the N/2 directional signals. It ignores the speed signals. Now the asymptotic performance in the second direction remains
unchanged because the most sensitive measurements are within the searched population
- they are directional signals. The learning rate, however, doubles since the
system searches a space half as large. 3.2 Simulation results To account for the
different modes of learning, we make the following assumptions. When the task
is easy, many components have high sensitivity d''. When the task is difficult,
only a small number of measurements have high d''. Therefore, when the task is
easy, a subset of M measurements that give rise to the best performance is found
relatively fast. In the extreme, when the task is very easy (e.g., all the measurements have very high sensitivity), the rate of learning is almost instantaneous
and the observed outcome appears to be transfer. On the other hand, when the task
is difficult, it takes a long time to find the M measurements that give rise to
the best performance, and learning is slow. 50 Z. Liu and D . Weinshall Figure
4: Hypothetical sensitivity profile for a population of measurements of speed
and motion direction. Left: First training direction - only the motion direction
measure ments have significant sensitivity (d'' above 0.1), with measurements
around 450 having the highest d''. Right: Second direction - only the motion direction
measurements have significant sensitivity, with measurements around 1350 having
the highest d''. The detailed operations of the model are as follows. In the first
training direction, the system starts with a random set of M measurements. In
each trial and using feedback, the mean and standard deviation of each measurement is computed: μ_i^{s,t}, σ_i^{s,t} for the signal and μ_i^{n,t}, σ_i^{n,t} for the noise. In the next trial, given M measurements, x is classified as the signal if S > 0, and noise otherwise. At time T, the worst measurement is identified as argmin_i d_i'', with d_i'' = 2|μ_i^{s,T} − μ_i^{n,T}| / (σ_i^{s,T} + σ_i^{n,T}). It is then replaced randomly by one of the remaining N − M measurements. The
learning and decision making then proceed as above for another T iterations. This
is repeated until the set of chosen measurements stabilizes. At the end, the decision
is made based on the set of M measurements that have the highest sensitivities.
Figure 5: Simulated performance (percent correct) as function of time. Left: Difficult
condition - the number of measurements with high d is small (4 out of 150); there
is no transfer from the first to the second training direction, but the learning
rate is increased two-fold. This graph is qualitatively similar to the results
shown in the top row of Fig. 2. Right: Easy condition - the number of measurements
with high d is large (72 out of 150); there is almost complete transfer from the
first to the second training direction. At the very beginning of training in the
second direction, based on the measured d in the first direction, the measurement
population is labeled as informative - those with d larger than the median value,
and uninformative - the remaining measurements. The learning and decision making
proceeds as above, while only informative measurements are considered during the
search. In the simulation we used N = 150 measurements, with M = 4. Half of the
N measurements (the informative measurements) had significant d. In the second
training direction, the sensitivities of the measurements were randomly changed,
but only the informative measurements had significant d. By varying the number
of measurements with high d_i'' in the population of informative measurements, we get the different modes of generalization (Fig. 5). 4 Discussions In contrast to previous results on the
specificity of learning, we broadened the search for generalization beyond traditional
transfer. We found that generalization is the rule, rather than an exception.
Perceptual learning of motion discrimination generalizes in various forms: as
acceleration of learning rate (Exp. 1), as immediate improvement in performance
(Exp. 2). Thus we show that perceptual learning is more similar to cognitive learning
than previously thought, with both stimulus specificity and generalization as
important ingredients. In our scheme, the assumption of the computational resource
forced the discrimination system to search in the measurement space. The generalization phenomena - transfer and increased learning rate - occur due to improvement in search sensitivity from one training direction to the next, as the size of the
search space decreases with learning. Our scheme also predicts that learning rate
should only improve if the subject both sees the stimulus and does the relevant
discrimination task, in agreement with the results in Exp. 1. Importantly, our
scheme does not predict transfer per se, but instead a dramatic increase in learning
rate that is equivalent to transfer. Our model is qualitative and does not make
any concrete quantitative predictions. We would like to emphasize that this is
not a handicap of the model. Our goal is to show , qualitatively, that the various
generalization phenomena should not surprise us, as they should naturally occur
in a generic discrimination system with limited computational resources. Thus
we argue that it may be too early to use existing perceptual learning results
for the identification of the cortical location of perceptual learning, and the
levels at which modifications are taking place. References [1] Ahissar M and Hochstein
S. Task difficulty and the specificity of perceptual [2] Ball K and Sekuler R.
A specific and enduring improvement in visual motion [3] Fiorentini A and Berardi
N. Perceptual learning specific for orientation and [4] Gilbert C D. Early perceptual
learning. PNAS, 91:1195-1197, 1994. [5] Karni A and Sagi D. Where practice makes
perfect in texture discrimination: Evidence for primary visual cortex plasticity.
PNAS, 88:4966-4970, 1991. [6] Liu Z. Learning a visual skill that generalizes.
Tech. Report, NECI, 1995. [7] Liu Z and Vaina L M. Stimulus specific learning:
a consequence of stimulus specific experiments? Perception, 24(supplement):21,
1995. [8] Poggio T, Fahle M, and Edelman S. Fast perceptual learning in visual
hyper [9] Ramachandran V S. Learning-like phenomena in stereopsis. Nature, 262:382-
[10] Rubin N, Nakayama K, and Shapley R. Abrupt learning and retinal size specificity
in illusory-contour perception. Current Biology, 7:461-467, 1997.'
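The resource-limited search model described above lends itself to a compact simulation. The Python sketch below follows the described procedure (track per-component sensitivity, periodically swap out the weakest of the M active components); the parameter values, noise model, and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, T = 150, 4, 200            # population size, capacity, trials per swap

# per-component sensitivities: half informative, a few highly sensitive (easy case: many)
d_prime = np.zeros(N)
d_prime[: N // 2] = rng.uniform(0.05, 0.3, N // 2)   # informative set
d_prime[:4] = 2.0                                     # the few highly sensitive components

active = rng.choice(N, M, replace=False)              # current sub-vector of M components
for epoch in range(100):
    # noisy estimate of each active component's d' from T trials
    est = d_prime[active] + rng.normal(0.0, 0.1, M) / np.sqrt(T)
    worst = active[int(est.argmin())]
    # replace the worst component with a random unused one
    pool = np.setdiff1d(np.arange(N), active)
    active[active == worst] = rng.choice(pool)

combined = np.sqrt((d_prime[active] ** 2).sum())      # combined d' of the chosen sub-vector
print("combined d' after search:", round(float(combined), 2))
```

Restricting the candidate pool to the previously informative half of the components, as the model proposes for a second training direction, halves the search space and roughly doubles the learning rate in this kind of simulation.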
- 'Introduction Application of mean-field theory to solve the problem of inference
in Belief Networks (BNs) is well known [1]. In this paper we will discuss a variational
mean-field theory and its application to BNs, sigmoidal BNs in particular. We
present a variational derivation of the mean-field theory proposed by Plefka [2]. The theory will be developed for a stochastic system, consisting of N binary random variables, S_i ∈ {0, 1}, described by the energy function E(S), and the following Boltzmann-Gibbs distribution at a temperature T: The application of this mean-field method to Boltzmann Machines (BMs) has already been done [3]. A large class of BNs are described by the following energy function: The application of the mean-field
theory for such energy functions is not straightforward and further approximations are needed. We propose a new approximation scheme and discuss its utility for sigmoid networks, which are obtained by substituting the sigmoid f(x) = 1/(1 + e^{-x}) in the above energy function. The paper is organized as follows. In section 2 we present a variational
derivation of Plefka''s mean-field theory. In section 3 the theory is extended
to sigmoidal belief networks. In section 4 empirical evaluation is done. Concluding
remarks are given in section 5. 2 A Variational mean-field theory Plefka [2] proposed a mean-field theory in the context of spin glasses. This theory can, in principle, yield an arbitrarily close approximation to log Z. In this section we present an alternate derivation from a variational viewpoint; see also [4], [5]. Let γ be a real parameter that takes values from 0 to 1. Let us define a γ-dependent partition and distribution function. Note that Z_1 = Z and p_1 = p. Introducing an external real vector θ, let us rewrite, where Z is the partition function associated with the distribution function p_γ. Using Jensen''s inequality, ⟨e^{-x}⟩ ≥ e^{-⟨x⟩}, we obtain (4). Taking logarithms on both sides of (4) we obtain (6). The right hand side is defined as a function of u and γ via the following assumption. Invertibility assumption: for each fixed u and γ, (5) can be solved for θ. If the invertibility assumption holds then we can use u as the independent vector (with θ dependent on u) and rewrite (6), where G is as defined in (7). This then gives a variational feel: treat u as an external variable vector and choose it to minimize G for a fixed γ. The stationarity conditions of the above minimization problem yield the mean-field equations. At the minimum point we have the equality G = -log Z_γ. It is difficult to invert (5) for γ ≠ 0, thus making it impossible to write an algebraic expression for G for any nonzero γ. At γ = 0 the inversion is straightforward and one obtains θ explicitly. A Taylor series approach is then undertaken around γ = 0 to build an approximation to G. Define the truncated expansion G_M; then G_M can be considered as an approximation of G. The stationarity conditions are enforced by setting the corresponding derivatives to zero. In this paper we will restrict ourselves to M ≤ 2. To do this we need to evaluate the following derivatives. For M = 1 we have the standard mean-field approach. The expression for M = 2 can be identified with the TAP correction. The term (10) yields the TAP term for the BM energy function.
3 Mean-field approximations for BNs The method, as developed in the previous section,
is not directly useful for BNs because of the intractability of the partial derivatives at γ = 0. To overcome this problem, we suggest an approximation based on Taylor series expansion. Though in this paper we will be restricting ourselves to the sigmoid activation function, this method is applicable to other activation functions also. This method enables calculation of all the necessary terms required for extending Plefka''s method to BNs. Since, for BN operation, T is fixed to 1, T will be dropped from all equations in the rest of the paper. Let us define a new energy function. Since β is the important parameter, E(β, S, u, w) will be referred to as E(β) so as to avoid notational clumsiness. We use a Taylor series approximation of E(β) with respect to β. Let us define the truncated expansion E_C. If E_C approximates E, then we can write (14). Let us now define the following function. The θ_i are assumed to be functions of u, β, γ, which are obtained by inverting equations (12). By replacing E by E_C in (15) we obtain A_C, where the definition of u is obtained by replacing E by E_C. In view of (14) one can consider A_C as an approximation to A. This observation suggests an approximation. The required terms needed in the Taylor expansion of G in γ can be approximated accordingly. The biggest advantage in working with A_C rather than G is that the partial derivatives of A_C with respect to γ at γ = 0 and β = 1 can be expressed as functions of u. We define (18). Figure 1: Three layer BN (2 x 4 x 6) with top-down propagation of beliefs. The activation function was chosen to be sigmoid. In light of the above discussion one can consider G_M ≈ A_MC; hence the mean-field equations can be stated as (19). In this paper we will restrict ourselves to M ≤ 2. The relevant objective functions for a general C follow; all these objective functions can be expressed as functions of u. 4 Experimental results To test the approximation schemes developed in the previous sections, numerical
experiments were conducted. Saul et al. [1] pioneered the application of mean-field theory to BNs. We will refer to their method as the SJJ approach. We compare our schemes with the SJJ approach. Small networks were chosen so that ln Z can be
computed by exact enumeration for evaluation purposes. For all the experiments
the network topology was fixed to the one shown in figure 1. This choice of the
network enables us to compare the results with those of [1]. To compare the performance
of our methods with their method we repeated the experiment conducted by them
for sigmoid BNs. Ten thousand networks were generated by randomly choosing weight
values in [-1,1]. The bottom layer units, or the visible units of each network
were instantiated to zero. The likelihood, ln Z, was computed by exact enumeration of all the states in the higher two layers. The approximate value of -ln Z was computed by A_MC; u was computed by solving the fixed point equations obtained from (19). The goodness of the approximation scheme was tested by the following measure. For a proper comparison we also implemented the SJJ method. The goodness of approximation for the SJJ scheme is evaluated by substituting A_MC in (22) by L_approx; for the specific formula see [1]. The results are presented in the form of histograms
in Figure 2. We also repeated the experiment with weights and biases taking values between -5 and 5; the results are again presented in the form of histograms in Figure 3. (Table 1: Mean of the goodness-of-approximation measure for randomly generated sigmoid networks, in different weight ranges: small weights in [-1, 1], large weights in [-5, 5].) The findings are summarized in the form of means tabulated in Table 1. For small weights G12
and the SJJ approach show close results, which was expected. But the improvement
achieved by the G22 scheme is remarkable; it gave a mean value of 0.0029 which
compares substantially well against the mean value of 0.01139 reported in [6].
The improvement in [6] was achieved by using mixture distribution which requires
introduction of extra variational variables; more than 100 extra variational variables are needed for a 5-component mixture. This results in substantial increase
in the computation costs. On the other hand the extra computational cost for G22
over G12 is marginal. This makes the G22 scheme computationally attractive over
the mixture distribution. Figure 2: Histograms for the G10 and SJJ schemes for weights taking values in [-1, 1], for sigmoid networks. The plot on the left shows histograms for the schemes G11 and G12. They did not have any overlaps; G11 gives a mean of -0.040 while G12 gives a mean of 0.0155. The middle plot shows the histogram for the SJJ scheme, with mean 0.0157. The plot at the extreme right is for the scheme G22. Of the three schemes, G12 is the most robust and also yields
reasonably accurate results. It is outperformed only by G22 in the case of sigmoid
networks with low weights. Empirical evidence thus suggests that the choice of
a scheme is not straightforward and depends on the activation function and also parameter values. Figure 3: Histograms for the G10 and SJJ schemes for weights taking values in [-5, 5] for sigmoid networks. The leftmost histogram is for the G11 scheme, with a mean of -0.0440; second from left is the G12 scheme, with a mean of 0.0231; second from right is the SJJ scheme, with a mean of 0.0962.
The scheme G22 is at the extreme right with mean -0.0456. 5 Discussion Application
of Plefka''s theory to BNs is not straightforward. It requires computation of
some averages which are not tractable. We presented a scheme in which the BN energy
function is approximated by a Taylor series, which gives a tractable approximation
to the terms required for Plefka''s method. Various approximation schemes depending
on the degree of the Taylor series expansion are derived. Unlike the approach
in [1], the schemes discussed here are simpler as they do not introduce extra
variational variables. Empirical evaluation on small scale networks shows that
the quality of approximations is quite good. For a more detailed discussion of
these points see [7]. References [1] Saul, L. K., Jaakkola, T. and Jordan, M. I. (1996), Mean field theory for sigmoid belief networks, Journal of Artificial Intelligence Research, 4. [2] Plefka, T. (1982), Convergence condition of the TAP equation for the infinite-ranged Ising spin glass model, J. Phys. A: Math. Gen., 15. [3] Kappen, H. J. and Rodriguez, F. B. (1998), Boltzmann machine learning using mean field theory and linear response correction, Advances in Neural Information Processing Systems 10, (eds.) M. I. Jordan, M. J. Kearns and S. A. Solla, MIT Press. [4] Georges, A. and Yedidia, J. S. (1991), How to expand around mean-field theory using high temperature expansions, J. Phys. A: Math. Gen., 24. [5] Bhattacharyya, C. and Keerthi, S. S. (2000), Information geometry and Plefka''s mean field theory, J. Phys. A: Math. Gen., 33. [6] Bishop, C. M., Lawrence, N., Jaakkola, T. and Jordan, M. I. (1997), Approximating Posterior Distributions in Belief Networks using Mixtures, Advances in Neural Information Processing Systems 10, (eds.) Jordan, M. I., Kearns, M. J. and Solla, S., MIT Press. [7] Bhattacharyya, C. and Keerthi, S. S. (1999), Mean field theory for a special class of belief networks, accepted in Journal of Artificial Intelligence Research.'
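To make the flavor of the fixed-point computation concrete, here is a minimal Python sketch of a first-order (M = 1) naive mean-field update for a network of binary units, solved by damped iteration. It is a generic illustration of solving equations like (19), with made-up weights; it is not the paper's A_MC scheme and omits the TAP correction.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field(W, b, iters=200, damping=0.5):
    """Damped fixed-point iteration u_i <- sigmoid(sum_j W_ij u_j + b_i)."""
    u = np.full(len(b), 0.5)               # start from uninformative marginals
    for _ in range(iters):
        u_new = sigmoid(W @ u + b)
        u = damping * u + (1.0 - damping) * u_new
    return u

rng = np.random.default_rng(2)
n = 12
W = rng.uniform(-1.0, 1.0, (n, n))
np.fill_diagonal(W, 0.0)                   # no self-coupling
b = rng.uniform(-1.0, 1.0, n)
u = mean_field(W, b)
print(np.round(u, 3))                      # approximate marginals of the units
```

Damping is a standard safeguard: the undamped map can oscillate for strong couplings, and the experiments above (weights in [-5, 5]) are exactly the regime where that matters.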
- 'Introduction It is known that auditory neurons are tuned for a number of independent
feature parameters of simple stimuli including frequency (Merzenich et al., 1973),
intensity (Sutter and Schreiner, 1995), amplitude modulation (Schreiner and Urbas,
1988), and Cha racterizing Auditory Cortical Ne urons Using Reverse Co rrelation
125 others. In addition, auditory cortical responses to multiple stimuli can enhance
or sup press one another in a time dependent fashion (Brosch and Schreiner, 1997;
Phillips and Cynader, 1985; Shamma and Symmes, 1985), and auditory cortical neurons
can be highly selective for species-specific vocalizations (Wang et al., 1995;
Wollberg and Newman, 1972), suggesting complex acoustic processing by these cells.
It is not yet known if these many independent selectivities of auditory cortical
neurons reflect a discernible underlying pattern of feature decomposition, as
has often been suggested (Merzenich et al., 1985; Schreiner and Mendelson, 1990;
Wang et al., 1995). Further, since sustained firing rate responses in the auditory
cortex to tonal stimuli are typically much lower than visual responses to drifting
bars (deCharms and Merzenich, 1996b), it has been suggested that the preferred
type of auditory stimulus may still not be known (Nelken et al., 1994). We sought
to develop an unbiased method for determining the full feature selectivity of
auditory cortical neurons, whatever it might be, in frequency and time based upon
reverse correlation. 2 Methods Recordings were made from a chronic array of up
to 49 individually placed ultrafine extracellular iridium microelectrodes, placed in the primary auditory cortex of the adult owl monkey. The electrodes had tip lengths of 10-25 microns, which yield impedance values of 0.5-5 MOhm and good isolation
of signals from individual neurons or clusters of nearby neurons. We electrochemically
activated these tips to add an ultramicroscopic coating of Iridium Oxide, which
leaves the tip geometry unchanged, but decreases the tip impedance by more than
an order of magnitude, resulting in substantially improved recording signals.
These signals are filtered from 0.3-8 kHz, sampled at 20 kHz, digitized, and sorted.
(Figure 1: Schematic of stimuli used for reverse correlation. Visual cortex: reverse correlation using 2D visual patterns in time, yielding spike trains and a spatiotemporal receptive field; auditory cortex: reverse correlation using 1D auditory patterns (chords) in time, yielding a spectrotemporal receptive field.) The stimuli used were a variant of random white noise which was designed to allow us to characterize the responses of neurons
in time and in frequency. As shown in figure 1, these stimuli are directly analogous
to stimuli that have been used previously to characterize the response properties
of neurons in the primary visual cortex (Jones and Palmer, 1987; Reid and Alonso,
1995; Reid et al., 1991). In the visual case, stimuli consist of spatial checkerboards
that span some portion of the two-dimensional visual field and change pattern
with a short sampling interval. In the auditory case, which we have studied here,
the stimuli chosen were randomly selected chords, which approximately evenly span
a 126 R C. deChann s and M M. Merzenich portion of the one-dimensional receptor
surface of the cochlea. These stimuli consist of combinations of pure tones, all
with identical phase and all with 5 msec cosine shaped ramps in amplitude when
they individually turn on or off. Each chord was created by randomly selecting
frequency values from 84 possible values which span 7 octaves from 110Hz to 14080Hz
in even semitone steps. The density of tones in each stimulus was 1 tone per octave
on average, or 7 tones per chord, but the stimuli were selected stochastically
so a given chord could be composed of a variable number of tones of randomly selected
frequencies. We have used sampling rates of 10-100 chords/second, and the data here are from stimuli with 50 chords/second. Stimuli with random, asynchronous onset times of each tone produce similar results. These stimuli were presented in the open sound field within an acoustical isolation chamber at 44.1 kHz sampling
rate directly from audio compact disk, while the animal sat passively in the sound
field or actively performed an auditory discrimination task, receiving occasional
juice rewards. The complete characterization set lasted for ten minutes, thereby
including 30,000 individual chords. Spike trains were collected from multiple
sites in the cortex simultaneously during the presentation of our characterization
stimulus set, and individually reverse correlated with the times of onset of each
of the tonal stimuli. The reverse correlation method computes the number of spikes
from a neuron that were detected, on average, during a given time preceding, during,
or following a particular tonal stimulus component from our set of chords. These
values are presented in spikes/s for all of the tones in the stimulus set, and
for some range of time shifts. This method is somewhat analogous in intention
to a method developed earlier for deriving spectrotemporal receptive fields for
auditory midbrain neurons (Eggermont et al., 1983), but previous methods have
not been effective in the auditory cortex. 3 Results Figure 2 shows the spectrotemporal
responses of neurons from four locations in the primary auditory cortex. In each
panel, the time in milliseconds between the onset of a particular stimulus component
and a neuronal spike is shown along the horizontal axis. Progressively greater
negative time shifts indicate progressively longer latencies from the onset of
a stimulus component until the neuronal spikes. The frequency of the stimulus
component is shown along the vertical axis, in octave spacing from a 110Hz standard,
with twelve steps per octave. The brightness corresponds to the average rate of
the neuron, in spks, driven by a particular stimulus component . The reverse-correlogram
is thus presented as a stimulus triggered spike rate average, analogous to a standard
peristimulus time histogram but reversed in time, and is identical to the spectrogram
of the estimated optimal stimulus for the cell (a spike triggered stimulus average
which would be in units of mean stimulus denSity). A minority of neurons in the
primary auditory cortex have spectrotemporal recep tive fields that show only
a single region of increased rate, which corresponds to the traditional characteristic
frequency of the neuron, and no inhibitory region. We have found that cells of
this type (less than 10, not shown) are less common than cells with multimodal
receptive field structure. More commonly, neurons have regions of both increased
and decreased firing rate relative to their mean rate within their receptive fields. For terminological convenience, these will be referred to as excitatory
and inhibitory regions, though these changes in rate are not diagnostic of an
underlying mechanism. Neurons with receptive fields of this type can serve as
detectors of stimulus edges in both frequency space, and in time. The neuron shown
in figure 2a has a receptive field structure indicative of lateral inhibition
in frequency space. This cell prefers a very narrow range of frequencies, and
decreases its firing rate for nearby frequencies, giving the characteristic of
a sharply-tuned bandpass filter. (Figure 2: Spectrotemporal receptive fields of neurons in the primary auditory cortex of the awake primate. These receptive fields are computed as described in methods. Receptive field structures read from left to right correspond to a preferred stimulus for the neuron, with light shading indicating more probable stimulus components to evoke a spike, and dark shading indicating less probable components. Receptive fields read from right to left indicate the response of the neuron in time to a particular stimulus component. The colorbars correspond to the average firing rates of the neurons in Hz at a given time preceding, during, or following a particular stimulus component.) This type of response is the auditory analog of a visual or tactile edge detector with lateral inhibition.
Simple cells in the primary visual cortex typically show similar patterns of center
excitation along a short linear segment, surrounded by inhibition (Jones and Palmer,
1987;Reid and Alonso, 1995; Reid et al., 1991). The neuron shown in figure 2b
shows a decrease in firing rate caused by a stimulus frequency which at a later
time causes an increase in rate. This receptive field structure is ideally suited
to detect stimulus transients; and can be thought of as a detector of temporal
edges. Neurons in the auditory cortex typically prefer this type of stimulus,
which is initially soft or silent and later loud. This corresponds to a neuronal
response which shows an increase followed by a decrease in firing rate. This is
again analogous to neuronal responses in the primary visual cortex, which also
typically show a firing rate pattern to an optimal stimulus of excitation followed
by inhibition, and preference for stimulus transients such as when a stimulus
is first off and then comes on. The neuron shown in figures 2c shows an example
which has complex receptive field structure, with multiple regions. Cells of this
type would be indicative of selectiv ity for feature conjunctions or quite complex
stimuli, perhaps related to sounds in the animal''s learned environment. Cells
with complex receptive field structures are common in the awake auditory cortex,
and we are in the process of quantifying the percentages of cells that fit within
these different categories. Neurons were observed which respond with increased
rate to one frequency range at one time, and a different frequency range at a
later time, indicative of selectivity for frequency modulations (Suga, 1965). Regions
of decreased firing rate can show similar patterns. The neuron shown in figure
2d is an example of this type. This pattern is strongly analogous to motion energy
detectors in the visual system (Adelson and Bergen, 1985), which detect stimuli
moving in space, and these cells are selective for changes in frequency. Figure 3:
Parametric stimulus set used to explore neuronal responses to continuously changing
stimulus frequency. Images axe spectrograms of stimuli from left to right in time,
and spanning seven octaves of frequency from bottom to top. Each stimulus is one
second. Numbers indicate the sweep rate of the stimuli in octaves per second.
Based on the responses shown, we wondered whether we could find a more optimal
class of stimuli for these neurons, analogous to the use of drifting bars or gratings
in the primary visual cortex. We have created auditory stimuli which correspond
exactly to the preferred stimulus computed for a particular cell from the cell''s spectrotemporal receptive field (manuscript in preparation), and we have also designed a parametric class of stimuli which are designed to be particularly effective for neurons selective for stimuli of changing amplitude or frequency, which are presented here. The stimuli shown in figure 3 are auditory analogues of visual drifting grating stimuli. The stimuli are shown as spectrograms, where time is
along the horizontal axis, frequency content on an octave scale is along the vertical
axis, and brightness corresponds to the intensity of the signal. These stimuli
contain frequencies that change in time along an octave frequency scale so that
they repeatedly pass approximately linearly through a neuron''s receptive field,
just as a drifting grating would pass repeatedly through the receptive field of
a visual neuron. These stimuli are somewhat analogous to drifting ripple stimuli which have recently been used by Kowalski et al. to characterize the linearity of responses of neurons in the anesthetized ferret auditory cortex (Kowalski et al., 1996a,b). Neurons
in the auditory cortex typically respond to tonal stimuli with a brisk onset response
at the stimulus transient, but show sustained rates that are far smaller than
found in the visual or somatosensory systems (deCharms and Merzenich, 1996a).
We have found neurons in the awake animal that respond with high firing rates
and significant selectivity to the class of moving stimuli shown in figure 3.
An outstanding example of this is shown in figure 4. The neuron in this example
showed a very high sustained firing rate to the optimal drifting stimulus, as
high as 60 Hz for one second. The neuron shown in this example also showed considerable
selectivity for stimulus velocity, as well as some selectivity for stimulus direction.
4 Conclusions These stimuli enable us to efficiently quantify the response characteristics
of neurons in the awake primary auditory cortex, as well as producing optimal
stimuli for particular neurons. The data that we have gathered thus far extend
our knowledge about the complex receptive field structure of cells in the primary
auditory cortex, and show some considerable analogy with neurons in the primary visual cortex. (Figure 4: Responses of a neuron in the primary auditory cortex of the awake primate to example stimuli taken from our characterization set, as shown in figure 3. In each panel, the average response rate histogram in spikes per second is shown below rastergrams showing the individual action potentials elicited on each of twenty trials.) In addition, they indicate
that it is possible to drive auditory cortical cells to high rates of sustained
firing, as in the visual cortex. This method will allow a number of future questions
to be addressed. Since we have recorded many neurons simultaneously, we are interested
in the interactions among large populations of neurons and how these relate to
stimuli. We are also recording responses to these stimuli while monkeys are performing
cognitive tasks involving attention and learning, and we hope that this will give
us insight into the effects on cell selectivity of the context provided by other
stimuli, the animal''s behavioral state or awareness of the stimuli, and the animal''s
prior learning of stimulus sets. 5 References Adelson EH, Bergen JR (1985) Spatiotemporal
energy models for the perception of motion. Brosch M, Schreiner CE (1997) Time course
of forward masking tuning curves in cat primary auditory cortex. J Neurophysiol,
77, 923-43. deCharms RC, Merzenich MM (1996a) Primary cortical representation
of sounds by the coordination of action-potential timing. Nature, 381, 610-3.
deCharms RC , Merzenich MM (1996b) Primary cortical representation of sounds by
the coordination of action-potential timing. Nature, 381, 610-613. Eggermont JJ, Aertsen AM, Johannesma PI (1983) Quantitative characterisation procedure for auditory neurons based on the spectro-temporal receptive field. Hear Res. Hubel DH,
Wiesel TN (1962) Receptive fields, binocular interaction and functional architecture
in the cat''s visual cortex. J. Physiol., 160, 106-154. Jones JP, Palmer LA (1987)
The two-dimensional spatial structure of simple receptive fields in cat striate cortex. J Neurophysiol, 58, 1187-211. Kowalski
N, Depireux DA, Shamma SA (1996a) Analysis of dynamic spectra in ferret primary
auditory cortex. I. Characteristics of single-unit responses to moving ripple
spectra. J Neurophysiol, 76, 3503-23. Kowalski N, Depireux DA, Shamma SA (1996b)
Analysis of dynamic spectra in ferret primary auditory cortex. II. Prediction
of unit responses to arbitrary dynamic spectra. J Neurophysiol, 76, 3524-34. Merzenich
MM, Jenkins WM, Middlebrooks JC (1985) Observations and hypotheses on special
organizational features of the central auditory nervous system. In: Dynamic Aspects of Neocortical Function, edited by G. M. Edelman, W. E. Gall and W. M. Cowan. Merzenich MM,
Knight PL, Roth GL (1973) Cochleotopic organization of primary auditory cortex
in the cat. Brain Res, 63, 343-6. Nelken I, Prut Y, Vaadia E, Abeles M (1994)
In search of the best stimulus: an optimization procedure for finding efficient
stimuli in the cat auditory cortex. Hear Res. Phillips DP, Cynader MS (1985) Some neural
mechanisms in the cat''s auditory cortex underlying sensitivity to combined tone
and wide-spectrum noise stimuli. Hear Res. Reid RC, Alonso JM (1995) Specificity
of monosynaptic connections from thalamus to visual cortex. Nature, 378, 281-4.
Reid RC, Soodak RE, Shapley RM (1991) Directional selectivity and spatiotemporal
structure of receptive fields of simple cells in cat striate cortex. J Neurophysiol,
66. Ringach DL, Hawken MJ, Shapley R (1997) Dynamics of orientation tuning in
macaque primary visual cortex. Nature, 387, 281-4. Schreiner CE, Mendelson JR
(1990) Functional topography of cat primary auditory cortex: distribution of integrated
excitation. J Neurophysiol, 64, 1442-59. Schreiner CE, Urbas JV (1988) Representation
of amplitude in the auditory cortex of the cat. II. Comparison between cortical
fields. Hear. Res., 32, 49-64. Shamma SA, Symmes D (1985) Patterns of inhibition
in auditory cortical cells in awake squirrel monkeys. Hear Res, 19, 1-13. Suga
N (1965) Responses of cortical auditory neurones to frequency modulated sounds
in echo-locating bats. Nature, 206, 890-1. Sutter ML, Schreiner CE (1995) Topography
of intensity tuning in cat primary auditory cortex: single-neuron versus multiple-neuron recordings. J Neurophysiol, 73. Wang X, Merzenich MM, Beitel R, Schreiner CE (1995)
Representation of a species specific vocalization in the primary auditory cortex
of the common marmoset: temporal and spectral characteristics. J Neurophysiol,
74, 2685-706. Wollberg Z, Newman JD (1972) Auditory cortex of squirrel monkey:
response patterns of single cells to species-specific vocalizations. Science,
175, 212-214.'
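The reverse-correlation computation at the heart of the passage above reduces to a spike-triggered average over the chord sequence. Below is a minimal Python sketch with a synthetic stimulus and spike train; the chord density and the 84 semitone bins follow the text, but the window length, firing probabilities, and all names are illustrative assumptions, not the recording parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n_freqs, n_chords, lags = 84, 30_000, 10    # 84 semitone bins; one bin per chord

# binary stimulus matrix: stimulus[f, t] == 1 if tone f is on in chord t
# (about 7 tones per chord on average, as in the text)
stimulus = (rng.random((n_freqs, n_chords)) < (7.0 / n_freqs)).astype(float)

# synthetic spike train loosely driven by one frequency band at a 3-bin lag
drive = np.roll(stimulus[40], 3)
spikes = (rng.random(n_chords) < 0.01 + 0.2 * drive).astype(float)

# spectrotemporal receptive field: spike probability conditioned on each (freq, lag)
strf = np.zeros((n_freqs, lags))
for lag in range(lags):
    shifted = stimulus[:, : n_chords - lag]          # tone onsets `lag` bins before each spike bin
    strf[:, lag] = shifted @ spikes[lag:] / shifted.sum(axis=1)

peak = np.unravel_index(strf.argmax(), strf.shape)
print("peak response at frequency bin", peak[0], "and lag", peak[1])
```

Reading a row of `strf` from right to left gives the neuron's response in time to that tone, mirroring the reading conventions described for Figure 2; on this synthetic example the peak lands at frequency bin 40 and lag 3 by construction.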
- source_sentence: Enhanced learning efficiency through input redundancy cancellation
in neural networks
sentences:
- 'INTRODUCTION Learning problems involving sequentially structured data cannot
be effectively dealt with by static models such as feedforward networks. Recurrent
networks allow to model complex dynamical systems and can store and retrieve contextual
information in a flexible way. Up until the present time, research efforts of
supervised learning for recurrent networks have almost exclusively focused on
error minimization by gradient descent methods. Although effective for learning
short term memories, practical difficulties have been reported in training recurrent
neural networks to perform tasks in which the temporal contingencies present in
the input/output sequences span long intervals (Bengio et al., 1994; Mozer, 1992).
Previous work on alternative training algorithms (Bengio et al., 1994) could suggest
that the root of the problem lies in the essentially discrete nature of the process
of storing information for an indefinite amount of time. Thus, a potential solution
is to propagate, backward in time, targets in a discrete state space rather than
differential error information. Extending previous work (Bengio Frasconi, 1994a),
in this paper we propose a statistical approach to target propagation, based on
the EM algorithm. We consider a parametric dynamical system with discrete states
and we introduce a modular architecture, with subnetworks associated to discrete
states. The architecture can be interpreted as a statistical model and can be
trained by the EM or generalized EM (GEM) algorithms (Dempster et al., 1977),
considering the internal state trajectories as missing data. In this way learning
is decoupled into also, ATT Bell Labs, Holmdel, N J 07733 428 Yoshua Bengio, Paolo
Frasconi a temporal credit assignment subproblem and a static learning subproblem
that consists of fitting parameters to the next-state and output mappings defined
by the estimated trajectories. In order to iteratively tune parameters with the
EM or GEM algorithms, the system propagates forward and backward a discrete distribution
over the n states, resulting in a procedure similar to the Baum-Welch algorithm
used to train standard hidden Markov models (HMMs) (Levinson et al., 1983). HMMs
however adjust their parameters using unsupervised learning, whereas we use EM
in a supervised fashion. Furthermore, the model presented here could be called
Input/Output HMM, or IOHMM, because it can be used to learn to map input sequences
to output sequences (unlike standard HMMs, which learn the output sequence distribution).
This model can also be seen as a recurrent version of the Mixture of Experts architecture
(Jacobs et al., 1991), related to the model already proposed in (Cacciatore and
Nowlan, 1994). Experiments on artificial tasks (Bengio & Frasconi, 1994a) have shown that EM recurrent learning can deal with long-term dependencies more effectively than backpropagation through time and other alternative algorithms. However, the model used in (Bengio & Frasconi, 1994a) has very limited representational capabilities and can only map an input sequence to a final discrete state. In the present paper we describe an extended architecture that allows one to fully exploit both input and output portions of the data, as required by the supervised learning paradigm. In this way, general sequence processing tasks, such as production, classification,
or prediction, can be dealt with. 2 THE PROPOSED ARCHITECTURE We consider a discrete state dynamical system based on the following state space description:

$$x_t = f(x_{t-1}, u_t), \qquad y_t = g(x_t, u_t) \tag{1}$$

where $u_t \in \mathbb{R}^m$ is the input vector at time $t$, $y_t \in \mathbb{R}^r$ is the output vector, and $x_t \in \{1, 2, \ldots, n\}$ is a discrete state. These equations define a generalized Mealy finite state machine, in which inputs and outputs may take on continuous values. In this paper, we consider a probabilistic version of these dynamics, where the current inputs and the current state distribution are used to estimate the state distribution and the output distribution for the next time step. Admissible state transitions will be specified by a directed graph $\mathcal{G}$ whose vertices correspond to the model''s states, and the set of successors for state $j$ is $S_j$. The system defined by equations (1) can be modeled by the recurrent architecture depicted in Figure 1(a). The architecture is composed of a set of state networks $\mathcal{N}_j$, $j = 1 \ldots n$ and a set of output networks $\mathcal{O}_j$, $j = 1 \ldots n$. Each one of the state and output networks is uniquely associated to one of the states, and all networks share the
same input $u_t$. Each state network $\mathcal{N}_j$ has the task of predicting the next state distribution, based on the current input and given that $x_{t-1} = j$. Similarly, each output network $\mathcal{O}_j$ predicts the output of the system, given the current state and input. All the subnetworks are assumed to be static and they are defined by means of smooth mappings $\mathcal{N}_j(u_t; \theta_j)$ and $\mathcal{O}_j(u_t; \vartheta_j)$, where $\theta_j$ and $\vartheta_j$ are vectors of adjustable parameters (e.g., connection weights). The ranges of the functions $\mathcal{N}_j(\cdot)$ may be constrained in order to account for the underlying transition graph $\mathcal{G}$. Each output $\varphi_{ij,t}$ of the state subnetwork $\mathcal{N}_j$ (at time $t$) is associated to one of the successors $i$ of state $j$. Thus the last layer of $\mathcal{N}_j$ has as many units as the cardinality of $S_j$. For convenience of notation, we suppose that $\varphi_{ij,t}$ are defined for each $i, j = 1, \ldots, n$ and we impose the condition $\varphi_{ij,t} = 0$ for each $i$ not belonging to $S_j$. The softmax function is used in the last layer:

$$\varphi_{ij,t} = \frac{e^{a_{ij,t}}}{\sum_{\ell \in S_j} e^{a_{\ell j,t}}}, \qquad j = 1, \ldots, n,\; i \in S_j$$

where $a_{ij,t}$ are intermediate variables that can be thought of as the activations of the output units of subnetwork $\mathcal{N}_j$. In this way $\sum_{i=1}^{n} \varphi_{ij,t} = 1$ for all $j, t$.

Figure 1: (a) The proposed IOHMM architecture. (b) Bottom: Bayesian network expressing conditional dependencies for an IOHMM; top: Bayesian network for a standard HMM.

The vector $\zeta_t \in \mathbb{R}^n$ represents the internal state of the model and it is computed as a linear combination of the outputs of the state networks, gated by the previously computed internal state:

$$\zeta_{it} = \sum_{j=1}^{n} \varphi_{ij,t}\, \zeta_{j,t-1} \tag{2}$$

Output networks compete to predict the global output of the system $\eta_t \in \mathbb{R}^r$:

$$\eta_t = \sum_{j=1}^{n} \zeta_{jt}\, \eta_{jt}$$

where $\eta_{jt} \in \mathbb{R}^r$ is the output of subnetwork $\mathcal{O}_j$. At this level, we do not need to further specify the internal architecture of the state and output subnetworks. Depending on the task, the designer may decide whether to include hidden layers and what activation rule to use for the hidden units.
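As a concrete illustration of the gated recurrence above, the following minimal numpy sketch runs one forward sweep of the architecture; the single-layer linear subnetworks, array shapes, and names are our own illustrative assumptions, not the notation of the original system.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def iohmm_forward(u_seq, W_state, W_out, zeta0):
    """One forward sweep of an IOHMM with n discrete states.

    u_seq   : (T, m) input sequence
    W_state : (n, n, m) linear state networks; row j maps u_t to activations a_{.j,t}
    W_out   : (n, r, m) linear output networks O_j
    zeta0   : (n,) initial state distribution (positive, sums to 1)
    """
    T = u_seq.shape[0]
    n = zeta0.shape[0]
    zeta = zeta0.copy()
    outputs = []
    for t in range(T):
        u = u_seq[t]
        # phi[:, j] = next-state distribution predicted by state network N_j
        phi = np.stack([softmax(W_state[j] @ u) for j in range(n)], axis=1)
        zeta = phi @ zeta                                   # recurrence (2)
        eta_j = np.stack([W_out[j] @ u for j in range(n)])  # per-state outputs eta_{jt}
        eta = zeta @ eta_j                                  # output networks compete, gated by zeta_t
        outputs.append(eta)
    return np.array(outputs), zeta
```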
This connectionist architecture can also be interpreted as a probability
model. Let us assume a multinomial distribution for the state variable $x_t$ and let us consider $\zeta_t$, the main variable of the temporal recurrence (2). If we initialize the vector $\zeta_0$ to positive numbers summing to 1, it can be interpreted as a vector of initial state probabilities. In general, we obtain the relation $\zeta_{it} = P(x_t = i \mid u_1^t)$, having denoted with $u_1^t$ the subsequence of inputs from time 1 to $t$, inclusively. Equation (2) then has the following probabilistic interpretation: $\varphi_{ij,t} = P(x_t = i \mid x_{t-1} = j, u_t)$, i.e., the subnetworks $\mathcal{N}_j$ compute transition probabilities conditioned on the input. As in neural networks trained to minimize the output squared error, the output $\eta_t$ of this architecture can be interpreted as an expected "position parameter" for the probability distribution of the output $y_t$. However, in addition to being conditional on an input $u_t$, this expectation is also conditional on the state $x_t$, i.e. $\eta_t = E[y_t \mid x_t, u_t]$. The actual form of the output density, denoted $f_Y(y_t; \eta_t)$,
will be chosen according to the task. For example a multinomial distribution is
suitable for sequence classification, or for symbolic mutually exclusive outputs.
Instead, a Gaussian distribution is adequate for producing continuous outputs.
In the first case we use a softmax function at the output of subnetworks OJ; in
the second case we use linear output units for the subnetworks $\mathcal{O}_j$. In order to reduce the amount of computation, we introduce an independency model among the variables involved in the probabilistic interpretation of the architecture. We shall use a Bayesian network to characterize the probabilistic dependencies among these variables. Specifically, we suppose that the directed acyclic graph $\mathcal{G}$ depicted at the bottom of Figure 1b is a Bayesian network for the dependency model associated to the variables $u_1^T, x_1^T, y_1^T$. One of the most evident consequences of this independency model is that only the previous state and the current input are relevant to determine the next state. This one-step memory property is analogous to the Markov assumption in hidden Markov models (HMM). In fact, the Bayesian network for HMMs can be obtained by simply removing the $u_t$ nodes and arcs from them (see top of Figure 1b). 3 A
SUPERVISED LEARNING ALGORITHM The learning algorithm for the proposed architecture is derived from the maximum likelihood principle. The training data are a set of $P$ pairs of input/output sequences (of length $T_p$): $\mathcal{D} = \{(u_1^{T_p}(p), y_1^{T_p}(p));\; p = 1 \ldots P\}$. Let $\Theta$ denote the vector of parameters obtained by collecting all the parameters $\theta_j$ and $\vartheta_j$ of the architecture. The likelihood function is then given by

$$L(\Theta; \mathcal{D}) = \prod_{p=1}^{P} P\big(y_1^{T_p}(p) \mid u_1^{T_p}(p); \Theta\big) \tag{6}$$

The output values (used here as targets) may also be specified intermittently. For example, in sequence classification tasks, one may only be interested in the output $y_T$ at the end of each sequence. The modification of the likelihood to account for intermittent targets is straightforward. According to the maximum likelihood
principle, the optimal parameters are obtained by maximizing (6). In order to apply EM to our case we begin by noting that the state variables $x_t$ are not observed. Knowledge of the model''s state trajectories would allow one to decompose the temporal learning problem into $2n$ static learning subproblems. Indeed, if $x_t$ were known, the probabilities $\zeta_{it}$ would be either 0 or 1 and it would be possible to train each subnetwork separately, without taking into account any temporal dependency. This observation allows us to link EM learning to the target propagation approach discussed in the introduction. Note that if we used a Viterbi-like approximation (i.e., considering only the most likely path), we would indeed have $2n$ static learning problems at each epoch. In order to derive the learning equations, let us define the complete data as $\mathcal{D}_c = \{(u_1^{T_p}(p), y_1^{T_p}(p), x_1^{T_p}(p));\; p = 1 \ldots P\}$, with corresponding complete-data log-likelihood $l_c(\Theta; \mathcal{D}_c)$ (7). Since $l_c(\Theta; \mathcal{D}_c)$ depends on the hidden state variables it cannot be maximized directly. The MLE optimization is then solved by introducing the auxiliary function $Q(\Theta; \hat{\Theta})$ and iterating the following two steps for $k = 1, 2, \ldots$:

Estimation: Compute $Q(\Theta; \hat{\Theta}) = E[l_c(\Theta; \mathcal{D}_c) \mid \mathcal{D}, \hat{\Theta}]$
Maximization: Update the parameters as $\hat{\Theta} \leftarrow \arg\max_{\Theta} Q(\Theta; \hat{\Theta})$ (8)

The expectation of (7) can be expressed in terms of $h_{ij,t} = E[z_{it} z_{j,t-1} \mid u_1^T, y_1^T; \hat{\Theta}]$, denoting by $z_{it}$ an indicator variable that is 1 if $x_t = i$ and 0 otherwise. The hat in $\hat{\zeta}_{it}$ and $\hat{h}_{ij,t}$ means that these variables are computed using the "old" parameters $\hat{\Theta}$. In order to compute $\hat{h}_{ij,t}$ we introduce the forward probabilities $\alpha_{it} = P(y_1^t, x_t = i; u_1^t)$ and the backward probabilities $\beta_{it} = P(y_t^T \mid x_t = i, u_1^T)$, which are updated with standard forward-backward recursions (sketched below).
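The original update equations are garbled in this extraction; the sketch below is our reconstruction of the E-step as a Baum-Welch-style forward-backward pass with input-conditional transition probabilities. The arrays `phi` and `b` (transition probabilities and output densities precomputed for one sequence) are assumed inputs.

```python
import numpy as np

def e_step(phi, b, zeta0):
    """Forward-backward pass for one sequence.

    phi   : (T, n, n) with phi[t, i, j] = P(x_t = i | x_{t-1} = j, u_t)
    b     : (T, n)    with b[t, i]     = f_Y(y_t; eta_{it})
    zeta0 : (n,)      initial state distribution
    Returns state posteriors g, pairwise posteriors h, and the likelihood L.
    """
    T, n, _ = phi.shape
    alpha = np.zeros((T, n))
    beta = np.ones((T, n))
    alpha[0] = b[0] * (phi[0] @ zeta0)
    for t in range(1, T):
        # alpha_{it} = f_Y(y_t; eta_{it}) * sum_j phi_{ij,t} alpha_{j,t-1}
        alpha[t] = b[t] * (phi[t] @ alpha[t - 1])
    for t in range(T - 2, -1, -1):
        beta[t] = phi[t + 1].T @ (beta[t + 1] * b[t + 1])
    L = alpha[-1].sum()                  # sequence likelihood
    g = alpha * beta / L                 # g[t, i] ~ hat-zeta_{it}
    h = np.zeros((T, n, n))
    for t in range(1, T):
        # h[t, i, j] ~ hat-h_{ij,t}, the expected transition indicators
        h[t] = (beta[t] * b[t])[:, None] * phi[t] * alpha[t - 1][None, :] / L
    return g, h, L
```

Each iteration of the EM algorithm requires maximizing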
$Q(\Theta; \hat{\Theta})$. We first consider a simplified case, in which the inputs are quantized (i.e., belonging to a finite alphabet $\{\sigma_1, \ldots, \sigma_K\}$) and the subnetworks behave like lookup tables addressed by the input symbols $\sigma_t$, i.e. we interpret each parameter as $w_{ijk} = P(x_t = i \mid x_{t-1} = j, \sigma_t = k)$. For simplicity, we restrict the analysis to classification tasks and we suppose that targets are specified as desired final states for each sequence. Furthermore, no output subnetworks are used in this particular application of the algorithm. In this case we obtain closed-form reestimation formulae. In general, however, if the subnetworks have hidden sigmoidal
units, or use a softmax function to constrain their outputs to sum to one, the maximum of Q cannot be found analytically. In these cases we can resort to a GEM algorithm, that simply produces an increase in Q, for example by gradient ascent. In this case, the derivatives of Q with respect to the parameters can be easily computed as follows. Let $\theta_{j\ell}$ be a generic weight in the state subnetwork $\mathcal{N}_j$. From equation (9), $\partial Q/\partial \theta_{j\ell}$ chains through the partial derivatives $\partial \varphi_{ij,t}/\partial \theta_{j\ell}$, which can be computed using backpropagation. Similarly, denoting with $\vartheta_{ik}$ a generic weight of the output subnetwork $\mathcal{O}_i$, the derivatives chain through $\partial \eta_{it}/\partial \vartheta_{ik}$, also computed using backpropagation. Intuitively, the parameters are updated as if the estimation step of EM had provided targets for the outputs of the $2n$ subnetworks, for each time $t$. Although GEM algorithms are also guaranteed to find a local maximum of the likelihood, their convergence may be significantly slower compared to EM. In several experiments we noticed that convergence can be accelerated with stochastic gradient ascent.

4 COMPARISONS It appears natural to find similarities between the
recurrent architecture described so far and standard HMMs (Levinson et al., 1983).
The architecture proposed in this paper differs from standard HMMs in two respects:
computing style and learning. With IOHMMs, sequences are processed similarly to
recurrent networks, e.g., an input sequence can be synchronously transformed into
an output sequence. This computing style is real-time and predictions of the outputs
are available as the input sequence is being processed. This architecture thus
allows one to implement all three fundamental sequence processing tasks: production,
prediction, and classification. Finally, transition probabilities in standard
HMMs are fixed, i.e. states form a homogeneous Markov chain. In IOHMMs, transition
probabilities are conditional on the input and thus depend on time, resulting
in an inhomogeneous Markov chain. Consequently, the dynamics of the system (specified
by the transition probabilities) are not fixed but are adapted in time depending
on the input sequence. The other fundamental difference is in the learning procedure.
While interesting for their capabilities of modeling sequential phenomena, a major weakness of standard HMMs is their poor discrimination power due to unsupervised
learning. An approach that has been found useful to improve discrimination in
HMMs is based on maximum mutual information (MMI) training. It has been pointed
out that supervised learning and discriminant learning criteria like MMI are actually
strictly related (Bridle, 1989). Although the parameter adjusting procedure we have defined is based on MLE, $y_1^T$ is used as desired output in response to the input $u_1^T$, resulting in discriminant supervised learning. Finally, it is worth mentioning that a number of hybrid approaches have been proposed to integrate connectionist approaches into the HMM framework. For example in (Bengio et al., 1992) the observations used by the HMM are generated by a feedforward neural network. In (Bourlard and Wellekens, 1990) a feedforward network is used to estimate state probabilities, conditional on the acoustic sequence. A common feature of these
algorithms and the one proposed in this paper is that neural networks are used
to extract temporally local information whereas a Markovian system integrates
long-term constraints. We can also establish a link between IOHMMs and adaptive
mixtures of experts (ME) (Jacobs et al., 1991). Recently, Cacciatore & Nowlan (1994) have proposed a recurrent extension to the ME architecture, called mixture of controllers (MC), in which the gating network has feedback connections, thus allowing it to take temporal context into account. Our IOHMM architecture can be interpreted
as a special case of the MC architecture, in which the set of state subnetworks
play the role of a gating network having a modular structure and second order
connections. 5 REGULAR GRAMMAR INFERENCE In this section we describe an application
of our architecture to the problem of grammatical inference. In this task the
learner is presented a set of labeled strings and is requested to infer a set
of rules that define a formal language. It can be considered as a prototype for
more complex language processing problems. However, even in the "simplest" case,
i.e. regular grammars , the task can be proved to be NP-complete (Angluin and
Smith, 1983). We report experimental results on a set of regular grammars introduced
by Tomita (1982) and afterwards used by other researchers to measure the accuracy
of inference methods based on recurrent networks (Giles et al., 1992; Pollack,
1991; Watrous and Kuhn, 1992). We used a scalar output with supervision on the final output $y_T$ that was modeled as a Bernoulli variable $f_Y(y_T; \eta_T) = \eta_T^{y_T} (1 - \eta_T)^{1 - y_T}$, with $y_T = 0$ if the string is rejected and $y_T = 1$ if it is accepted. In this application we did not apply external inputs to the output networks. This corresponds to modeling a Moore finite state machine.

Table 1: Summary of experimental results on the seven Tomita''s grammars (columns: grammar, model size n and minimal FSA size, convergence rate, average/worst/best accuracies, and the best result of Watrous & Kuhn).

Given the absence of prior knowledge about plausible state paths, we used an ergodic transition graph (i.e., fully connected). In the experiments we measured
convergence and generalization performance using different sizes for the recurrent
architecture. For each setting we ran 20 trials with different seeds for the initial
weights. We considered a trial successful if the trained network was able to correctly
label all the training strings. The model size was chosen using a cross-validation
criterion based on performance on 20 randomly generated strings of length $T \le 12$. For comparison, in Table 1 we also report for each grammar the number of states of the minimal recognizing FSA (Tomita, 1982). We tested the trained networks on a corpus of $2^{13} - 1$ binary strings of length $T \le 12$. The final results are
summarized in Table 1. The column "Convergence" reports the fraction of trials
that succeeded to separate the training set. The next three columns report averages
and order statistics (worst and best trial) of the fraction of correctly classified
strings, measured on the successful trials. For each grammar these results refer
to the model size n selected by cross-validation. Generalization was always perfect
on grammars 1,4,5 and 6. For each grammar, the best trial also attained perfect
generalization. These results compare very favorably to those obtained with second-order
networks trained by gradient descent, when using the learning sets proposed by
Tomita. For comparison, in the last column of Table 1 we reproduce the results reported by Watrous & Kuhn (1992) in the best of five trials. In most of the successful
trials the model learned an actual FSA behavior with transition probabilities
asymptotically converging either to 0 or to 1. This renders trivial the extraction
of the corresponding FSA . Indeed, for grammars 1,4,5, and 6, we found that the
trained networks behave exactly like the minimal recognizing FSA . A potential
training problem is the presence of local maxima in the likelihood function. For example, the number of converged trials for grammars 3, 4, and 5 is quite small and the difficulty of discovering the optimal solution might become a serious restriction for tasks involving a large number of states. In other experiments (Bengio & Frasconi, 1994a), we noticed that restricting the connectivity of the transition graph can significantly help to remove problems of convergence. Of course, this approach can be effectively exploited only if some prior knowledge about the state space is available. For example, applications of HMMs to speech
recognition always rely on structured topologies. 6 CONCLUSIONS There are still a number of open questions. In particular, the effectiveness of the model on tasks involving large or very large state spaces needs to be carefully evaluated. In (Bengio & Frasconi, 1994b) we show that learning long-term dependencies in these models becomes more difficult as we increase the connectivity of the state transition graph. However, because transition probabilities of IOHMMs change at each $t$, they deal better with this problem of long-term dependencies than standard HMMs. Another interesting aspect to be investigated is the capability
of the model to successfully perform tasks of sequence production or prediction.
For example, interesting tasks that could also be approached are those related
to time series modeling and motor control learning. References Angluin, D. and Smith, C. (1983). Inductive inference: Theory and methods. Computing Surveys. Bengio, Y. and Frasconi, P. (1994a). Credit assignment through time: Alternatives to backpropagation. In Cowan, J., Tesauro, G., and Alspector, J., editors, Advances in Neural Information Processing Systems 6. Morgan Kaufmann. Bengio, Y. and Frasconi, P. (1994b). An EM Approach to Learning Sequential Behavior. Tech. Rep. RT-DSI11-94, University of Florence. Bengio, Y., De Mori, R., Flammia, G., and Kompe, R. (1992). Global optimization of a neural network-hidden Markov model hybrid. IEEE Transactions on Neural Networks. Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Networks, 5(2). Bourlard, H. and Wellekens, C. (1990). Links between hidden Markov models and multilayer perceptrons. IEEE Trans. Pattern An. Mach. Intell., 12:1167-1178. Bridle, J. S. (1989). Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters. In D. S. Touretzky, ed., NIPS 2, pages 211-217. Morgan Kaufmann. Cacciatore, T. W. and Nowlan, S. J. (1994). Mixtures of controllers for jump linear and non-linear plants. In Cowan, J. et al., editors, Advances in Neural Information Processing Systems 6, San Mateo, CA. Morgan Kaufmann. Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum-likelihood from incomplete data via the EM algorithm. J. Royal Stat. Soc. B, 39:1-38. Giles, C. L. et al. (1992). Learning and extracting finite state automata with second-order recurrent neural networks. Neural Computation, 4(3):393-405. Jacobs, R. A., Jordan, M. I., Nowlan, S. J., and Hinton, G. E. (1991). Adaptive mixture of local experts. Neural Computation, 3:79-87. Levinson, S. E., Rabiner, L. R., and Sondhi, M. M. (1983). An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition. Bell System Technical Journal, 64(4):1035-1074. Mozer, M. C. (1992). The induction of multiscale temporal structure. In Moody, J. et al., eds, NIPS 4, pages 275-282. Morgan Kaufmann. Pollack, J. B. (1991). The induction of dynamical recognizers. Machine Learning. Tomita, M. (1982). Dynamic construction of finite-state automata from examples using hill-climbing. Proc. 4th Cog. Science Conf., pp. 105-108, Ann Arbor MI. Watrous, R. L. and Kuhn, G. M. (1992). Induction of finite-state languages using second-order recurrent networks. Neural Computation, 4(3):406-414.'
- 'INTRODUCTION In many learning control problems, the evaluation used to modify
(and thus im prove) control may not be available in terms of the controller''s
output: instead, it may be in terms of a spatial transformation of the controller''s
output variables (in which case we shall term it as being "distal in space"),
or it may be available only several time steps into the future (termed as being
"distal in time"). For example, control of a robot arm may be exerted in terms
of joint angles, while evaluation may be in terms of the endpoint cartesian coordinates;
furthermore, we may only wish to evaluate the endpoint coordinates reached after
a certain period of time: the coordinates reached at the end of some motion, for instance. In such cases, supervised learning
methods are not directly applicable, and other techniques must be used. Here we
study one such technique (proposed for cases where the evaluation is distal in both space and time by [Jordan & Jacobs 90]), analyse a source of its problems, and propose a simple solution for them which leads to fast, efficient learning.
We first describe two methods, and then combine them into the "predictive forward
modeling" technique with which we are concerned. 1.1 FORWARD MODELING "Forward
Modeling" [Jordan Rumelhart 90] is useful for dealing with evaluations which
are distal in space; it involves the construction of a differentiable model to
approximate the controller-action - evaluation transformation. Let our controller
have internal parameters w, output c, and be evaluated in space e, where e e(c)
is an unknown but well-defined transformation. If there is a desired output in
space e, called e, we can write an "error" function, that is, an evaluation we
wish minimised, and differentiate it w.r.t. the controller''s weights to obtain
Using a differentiable controller allows us to obtain the first factor in the
second equation, and the third factor is also known; but the second factor is
not. However, if we construct a differentiable model (called a ''''forward model")
of e(c), then we can obtain an approximation to the second term by differentiating
the model, and use this to obtain an estimate of the gradient 8E 8w through equation
(1); this can then be used for comparatively fast minimisation of E, and is what
is known as "forward modeling". 1.2 PREDICTIVE CRITICS To deal with evaluations
which are distal in time, we may use a "critic" network, as in [Barto, Sutton Anderson
83]. For a particular control policy implemented by the controller network, the
critic is trained to predict the final evaluation that will be obtained given
the current state - using, for example, Sutton''s TD algorithm [Sutton 88]. The
estimated final evaluation is then available as soon as we enter a state, and
so may in turn be used to improve the control policy. This approach is closely
related to dynamic programming [Barto, Sutton Watkins 89]. 1.3 PREDICTIVE FORWARD
MODELS While the estimated evaluation we obtain from the critic is no longer distal
in time, it may still be distal in space. A natural proposal in such cases, where
the evaluation signal is distal both in space and time, is to combine the two
techniques described above: use a differentiable model as a predictive critic
[Jordan Jacobs 90]. If we know the desired final evaluation, we can then proceed
as in equation (1) and obtain the gradient of the error w.r.t. the controller''s
weights. Schematically, this would look like figure 1.

Figure 1: Jordan and Jacobs'' predictive forward modeling architecture. Solid lines indicate data paths, the dashed line indicates backpropagation.

When using a backprop network for the predictive model, we would backpropagate through it, through its control input, and then into the controller to modify
the controller network. We should note that since predictions make no sense without
a particular control policy, and the controller is only modified through the predictive
model, both networks must be trained simultaneously. [Jordan & Jacobs 90] applied this method to a well-known problem, that of learning to balance an inverted
pendulum on a movable cart by exerting appropriate horizontal forces on the cart.
The same task, without differentiating the critic, was studied in [Barto, Sutton Anderson
83]. There, reinforcement learning methods were used instead to modify the controller''s
weights; these perform a search which in some cases may be shown to follow, on
average, the gradient of the expected evaluation w.r .t. the network weights.
Since differentiating the critic allows this gradient to be found directly, one
would expect much faster learning when using the architecture of figure 1. However,
Jordan and Jacobs'' results show precisely the opposite: it is surprisingly slow.
2 THE REDUNDANCY PROBLEM We can explain the above surprising result if we consider the fact that the predictive model network has redundant inputs: the control vector $c$ is a function of the state vector $s$ (call this $c = \eta(s)$). Let $\kappa$ and $\sigma$ be the number of components of the control and state vectors, respectively. Instead of drawing its inputs from the entire volume of the $(\kappa + \sigma)$-dimensional input space, the predictor is trained only with inputs which lie on the $\sigma$-dimensional manifold defined by the relation $\eta$. Away from the manifold the network is free to produce entirely arbitrary outputs. Differentiation of the model will then provide non-arbitrary gradients only for directions tangential to the manifold; this is a condition that the axes of the control dimensions will not, in general, satisfy.¹ This observation, which concerns any model trained with redundant inputs, is the very simple yet principal point of this paper.

¹Note that if it is single-valued, there is no way the manifold can "fold around" to cover all (or most) of the $\kappa + \sigma$ input space.

Figure 2: The evaluation as a function of control action. Curves A, B, C, D represent possible (wrong) estimates of the "real" curve made by the predictive model network.

One may argue that since the control policy is continually changing, the redundancy picture sketched out here is not in fact accurate: as the controller is modified, many possible control policies are "seen" by the predictor, so creating volume in input space
and leading to correct gradients obtained from the predictor. However, the way
in which this modification occurs is significant. An argument based on empirical
observations will be made to sustain this. Consider the example shown in figure
2. The graph shows what the "real" evaluation at some point in state space is,
as a function of a component of the control action taken at that point; this function
is what the predictive network should approximate. Suppose the function implemented
by the predictive network initially looks like the curve which crosses the "real"
evaluation function at point (a); suppose also that the current action taken also
corresponds to point (a). Here we see a one-dimensional example of the redundancy
problem: though the prediction at this point is entirely accurate, the gradient
is not. If we wish to minimise the predicted evaluation, we would change the action
in the direction of point (b). Examples of point (a) will no longer be presented
to the predictive network, so it could quite plausibly modify itself simply so
as to look like the estimated evaluation curve "B" which is shown crossing point
(b) (a minimal change necessary to continue being correct). Again, the gradient
is wrong and minimising the prediction will change the action in the same direction
as before, perhaps to point (c); then to (d), and so on. Eventually, the prediction,
though accurate, will have zero gradient, as in curve "D", and no modifications
will occur. In practice, we have observed networks "getting stuck" in this fashion.
Though the objective was to minimise the evaluation, the system stops "learning"
at a point far from optimal. The problem may be solved, as Jordan and Jacobs did,
by introducing noise in the controller''s output, thus breaking the redundancy.
Unfortunately, this degrades signal quality and means that since we are predicting future evaluations, we wish to predict the effects of future noise - a notoriously difficult objective. The predictive network eventually outputs the evaluation''s expectation value, but this can take a long time.

Figure 3: The proposed system architecture. Again, solid lines represent data paths while the dashed line represents backpropagation (or differentiation).

3 USING AN INTERMEDIATE MODEL 3.1 AN EXTRA WORLD MODEL
Another way to solve the redundancy problem is through the use of what is here
called an "intermediate model": a model of the world the controller is interacting
with. That is, if $s(t)$ represents the state vector at time $t$, and $c(t)$ the controller output at time $t$, it is a model of the function $f$, where $s(t+1) = f(s(t), c(t))$. This model is used as represented schematically in figure 3. It helps in modularising the learning task faced by the predictive model [Chrisley 90], but more interestingly, it need not be trained simultaneously with the controller since its output does not depend on future control policy. Hence, it can be trained separately, with examples drawn from its entire (state x action) input space, providing gradient signals without arbitrary components when differentiated. Once trained, we freeze the intermediate model''s weights and insert it into the system as in figure 3; we then proceed to train the controller and predictive model as before. The predictive model will no longer have redundant inputs when trained either, so it too will provide correct gradient signals. Since all arbitrary components have been eliminated, the speedup expected from using differentiable predictive models should now be obtainable.²
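A minimal sketch of the two-phase scheme under simple assumptions (a linear world model fitted by ridge regression); the shapes, names, and regression choice are ours, not the paper''s.

```python
import numpy as np

def fit_intermediate_model(states, actions, next_states, lam=1e-3):
    """Phase 1: fit s(t+1) ~ f(s(t), c(t)) by ridge regression on samples
    drawn from the whole (state x action) space, independent of any policy."""
    X = np.hstack([states, actions])           # (N, ds + da)
    A = X.T @ X + lam * np.eye(X.shape[1])
    F = np.linalg.solve(A, X.T @ next_states)  # (ds + da, ds), frozen afterwards
    return F

def predicted_next_state(F, s, c):
    # Stands in for the real world while training controller and predictor.
    return np.concatenate([s, c]) @ F

def model_gradient_wrt_action(F, action_dim):
    # Rows of F for the action inputs; the transpose is the Jacobian
    # d s(t+1) / d c(t). This gradient path is exact over the whole input
    # space, so no arbitrary off-manifold components leak into learning.
    return F[-action_dim:, :]
```

3.2 AN EXAMPLE TASK The intermediate model architecture was tested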
on the same example task as used by Jordan and Jacobs, that of learning to balance
a pole which is attached through a hinge on its lower end to a movable cart. The control action is a real-valued force² applied to the cart; the evaluation signal is a "0" while the pole has not fallen over, and the cart hasn''t reached the edge of the finite-sized tracks it is allowed to move on, a "1" when either of these events happens. A trial is then said to have failed, and terminates.³

²This same architecture was independently proposed in [Werbos 90], but without the explanation as to why the intermediate model is necessary instead of merely desirable.

Figure 4: The evolution of eight different learning networks, using the intermediate model.

We count the number of learning trials
needed before a controller is able to keep the pole balanced for a significant
amount of time (measured in simulated seconds). Figure 4 shows the evolution of eight networks; most reach balancing solutions within 100 to 300 failures.
(These successful networks came from a batch of eleven: the other three never
reached solutions.) This is 50 to 100 times faster than without the intermediate
model, where 5000 to 30000 trials were needed to achieve similar balancing times
[Jordan Jacobs 90]. We must now take into account the overhead needed to train
the intermediate model. This was done in 200 seconds of simulated time, while
training the whole system typically required some 400 seconds-the overhead is
small compared to the improvement achieved through the use of the intermediate
model. However, off-line training of the intermediate model requires an additional
agency to organise the selection and presentation of training examples. In the
real world, we would either need some device which could initialise the system
at any point in state space, or we would have to train through "flailing": applying
random control actions, over many trials, so as to eventually cover all possible
states and actions. As the dimensionality of the state representation rises for
larger problems, intermediate model training will become more difficult. 3The
differential equations which were used as a model of this system may be found
in [Barto, Sutton & Anderson 83]. The parameters of the simulations were identical to those used in [Jordan & Jacobs 90]. 3.3 REMARKS We should note that the need for covering all state space is not
merely due to the requirement of training an intermediate model: dynamic-programming
based techniques such as the ones mentioned in this paper are guaranteed to lead
us to an optimal control solution only if we explore the entire state space during
learning. This is due to their generality, since no a priori structure of the
state space is assumed. It might be possible to interleave the training of the
intermediate model with the training of the controller and predictor networks,
so as to achieve both concurrently. High-dimensional problems will still be problematic,
but not just due to intermediate model training-the curse of dimensionality is
not easily avoided! 4 CONCLUSIONS If we differentiate through a model trained
with redundant inputs, we eliminate possible arbitrary components (which are due
to the arbitrary mixing of the inputs that the model may use) only if we differentiate
tangentially along the manifold defined by the relationship between the inputs.
For the architecture presented in [Jordan & Jacobs 90], this is problematic, since
the axes of the control vector will typically not be tangential to the manifold.
Once we take this into account, it is clear why the architecture was not as efficient
as expected; and we can introduce an "intermediate" world model to avoid the problems
that it had. Using the intermediate model allows us to correctly obtain (through backpropagation, or differentiation) a real-valued vector evaluation on the controller''s output. On the example task presented here, this led to a 50- to 100-fold increase in learning speed, and suggests a much better scaling-up performance and applicability
to real-world problems than simple reinforcement learning, where real-valued outputs
are not permitted, and vector control outputs would train very slowly. Acknowledgements
Many thanks are due to Richard Rohwer, who supervised the beginning of this project,
and to M. I. Jordan and R. Jacobs, who answered questions enlighteningly; thanks
are also due to Dr F. Bracho at IIMAS, UNAM, who provided the environment for the project''s conclusion. This work was supported by scholarships from CONACYT in Mexico and from Caltech in the U.S. References [Ackley 88] D. H. Ackley, "Associative
Learning via Inhibitory Search", in D. S. Touretzky, ed., Advances in Neural Information
Processing Systems 1, Morgan Kaufmann 1989 [Barto, Sutton & Anderson 83] A. G. Barto, R. S. Sutton, and C. W. Anderson, "Neuronlike Adaptive Elements that can Solve Difficult Control Problems", IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-13, No. 5 [Barto, Sutton & Watkins 89] A. G. Barto, R. S. Sutton, and C. J. C. H. Watkins, "Learning and Sequential Decision Making", University of Massachusetts at Amherst COINS Technical Report 89-95, September 1989 [Chrisley 90] R. L. Chrisley, "Cognitive Map Construction and Use: A Parallel Distributed Approach", in Touretzky, Elman, Sejnowski, and Hinton, eds., Connectionist Models: Proceedings of the 1990 Summer School, Morgan Kaufmann [Jordan & Jacobs 90] M. I. Jordan and R. A. Jacobs, "Learning to Control an Unstable System with Forward Modeling", in D. S. Touretzky, ed., Advances in Neural Information Processing Systems 2, Morgan Kaufmann 1990 [Jordan & Rumelhart 90] M. I. Jordan and D. E. Rumelhart, "Supervised Learning with a Distal Teacher", preprint. [Nguyen & Widrow 90] D. Nguyen and B. Widrow, "The Truck Backer-Upper: An Example of Self-Learning in Neural Networks", in Miller, Sutton and Werbos, eds., Neural Networks for Control, MIT Press 1990 [Sutton 88] R. S. Sutton, "Learning to Predict by the Methods of Temporal Differences", Machine Learning 3: 9-44, 1988 [Werbos 90] P. Werbos, "Architectures for Reinforcement Learning", in Miller, Sutton and Werbos, eds., Neural Networks for Control, MIT Press 1990'
- 'Introduction Kernel machines have recently gained a lot of attention due to the
popularisation of the support vector machine (SVM) [13] with a focus on classification
and the revival of Gaussian Processes (GP) for regression [15]. Subsequently,
SVMs have been modified to handle regression [12] and GPs have been adapted to
the problem of classification [8]. Both schemes essentially work in the same function
space that is characterised by kernels (SVM) and covariance functions (GP), respectively.
While the formal similarity of the two methods is striking the underlying paradigms
of inference are very different. The SVM was inspired by results from statisticalPAC
learning theory while GPs are usually considered in a Bayesian framework. This
ideological clash can be viewed as a continuation in machine learning of the by
now classical disagreement between Bayesian and frequentistic statistics. With
regard to algorithmics the two schools of thought appear to favour two different
methods of learning and predicting: the SVM community - as a consequence of the
formulation of the SVM as a quadratic programming problem - focuses on learning
as optimisation while the Bayesian community favours sampling schemes based on
the Bayesian posterior. Of course there exists a strong relationship between the
two ideas, in particular with the Bayesian maximum a posteriori (MAP) estimator
being the solution of an optimisation problem. Interestingly, the two viewpoints
have recently been reconciled theoretically in the so-called PAC-Bayesian framework
[5] that combines the idea of a Bayesian prior with PAC-style performance guarantees
and has been the basis of the so far tightest margin bound for SVMs [3]. In practice,
optimisation based algorithms have the advantage of a unique, deterministic solution
and the availability of the cost function as an indicator for the quality of the
solution. In contrast, Bayesian algorithms based on sampling and voting are more
flexible and have the so-called "anytime" property, providing a relatively good
solution at any point in time. Often, however, they suffer from the computational
costs of sampling the Bayesian posterior. In this contribution we review the idea
of the Bayes point machine (BPM) as an approximation to Bayesian inference for
linear classifiers in kernel space in Section 2. In contrast to the GP viewpoint
we do not define a Gaussian prior on the length Ilwllx: of the weight vector.
Instead, we only consider weight vectors of length Ilwllx: 1 because it is only
the spatial direction of the weight vector that matters for classification. It
is then natural to define a uniform prior on the resulting ball shaped hypothesis
space. Hence, we determine the centre of mass ("Bayes point") of the resulting
posterior that is uniform in version space, i.e. in the zero training error region.
While the version space could be sampled using some form of Gibbs sampling (see,
e.g. [6] for an overview) or an ergodic dynamic system such as a billiard [4]
we suggest to use the perceptron algorithm trained on permutations of the training
set for sampling in Section 3. This extremely simple sampling scheme proves to
be efficient enough to make the BPM applicable to large data sets. We demonstrate
this fact in Section 4 on the well-known MNIST data set containing 60 000 samples
of handwritten digits and show how an approximation to the posterior probability
of classification provided by the BPM can even be used for test-point rejection
leading to a great reduction in generalisation error on the remaining samples.
We denote n-tuples by italic bold letters (e.g. $\boldsymbol{x} = (x_1, \ldots, x_n)$), vectors by roman bold letters (e.g. $\mathbf{x}$), random variables by sans serif font (e.g. $\mathsf{X}$) and vector spaces by calligraphic capitalised letters (e.g. $\mathcal{X}$). The symbols $\mathbf{P}$, $\mathbf{E}$ and $\mathbf{I}$ denote a probability measure, the expectation of a random variable and the indicator function, respectively. 2 Bayes Point Machines Let us consider the task
of classifying patterns $x \in \mathcal{X}$ into one of the two classes $y \in \mathcal{Y} = \{-1, +1\}$ using functions $h : \mathcal{X} \to \mathcal{Y}$ from a given set $\mathcal{H}$ known as the hypothesis space. In this paper we shall only be concerned with linear classifiers:

$$h_{\mathbf{w}}(x) = \mathrm{sign}\left(\langle \mathbf{x}, \mathbf{w} \rangle_{\mathcal{K}}\right) \tag{1}$$

where $\phi : \mathcal{X} \to \mathcal{K}$ is known¹ as the feature map and has to be fixed beforehand. If all that is needed for learning and classification are the inner products $\langle \cdot, \cdot \rangle_{\mathcal{K}}$ in the feature space $\mathcal{K}$, it is convenient to specify $\phi$ only by its inner product function $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, known as the kernel, i.e. $k(x_1, x_2) = \langle \phi(x_1), \phi(x_2) \rangle_{\mathcal{K}}$ (2). For simplicity, let us assume that there exists a classifier² $\mathbf{w}^* \in \mathcal{W}$ that labels all of the data. This assumption can easily be relaxed by introducing slack variables as done in the soft margin variant of the SVM. Then given a training set $z = (\boldsymbol{x}, \boldsymbol{y})$ of $m$ points $x_i$ together with their classes $y_i$ assigned by $h_{\mathbf{w}^*}$, drawn iid from an unknown data distribution $P_Z = P_{Y|X} P_X$, we can assume the existence of a version space $V(z)$, i.e. the set of all classifiers $\mathbf{w} \in \mathcal{W}$ consistent with $z$:

$$V(z) = \{\mathbf{w} \in \mathcal{W} \mid \forall i:\; y_i \langle \mathbf{x}_i, \mathbf{w} \rangle_{\mathcal{K}} > 0\} \tag{3}$$

¹For notational convenience we shall abbreviate $\phi(x)$ by $\mathbf{x}$. This should not be confused with the set $\boldsymbol{x}$ of training points.

In a Bayesian spirit
we incorporate all of our prior knowledge about $\mathbf{w}$ into a prior distribution $P_{\mathbf{W}}$ over $\mathcal{W}$. In the absence of any a priori knowledge we suggest a uniform prior over the spatial direction of weight vectors $\mathbf{w}$. Now, given the training set $z$ we update our prior belief by Bayes'' formula, i.e.

$$P_{\mathbf{W}|Z^m = z}(\mathbf{w}) \propto \begin{cases} P_{\mathbf{W}}(\mathbf{w}) & \text{if } \mathbf{w} \in V(z) \\ 0 & \text{otherwise} \end{cases}$$

where the first line follows from the independence and the fact that $\boldsymbol{x}$ has no dependence on $\mathbf{w}$, and the second line follows from (2) and (3). The Bayesian classification of a novel test point $x$ is then given by

$$\mathrm{Bayes}_z(x) = \operatorname{argmax}_{y \in \mathcal{Y}}\, P_{\mathbf{W}|Z^m = z}(\{h_{\mathbf{w}}(x) = y\}) = \operatorname{sign}\left(E_{\mathbf{W}|Z^m = z}[h_{\mathbf{w}}(x)]\right)$$

Unfortunately, the strategy $\mathrm{Bayes}_z$ is in general not contained in the set $\mathcal{H}$ of classifiers considered beforehand. Since $P_{\mathbf{W}|Z^m = z}$ is only non-zero inside version space, it has been suggested to use the centre of mass $\mathbf{w}_{cm}$ as an approximation for $\mathrm{Bayes}_z$ (4). This classifier is called the Bayes point.
In a previous work [4] we calculated $\mathbf{w}_{cm}$ using a first order Markov chain based on a billiard-like algorithm (see also [10]). We entered the version space $V(z)$ using a perceptron algorithm and started playing billiards in version space $V(z)$, thus creating a sequence of pseudo-random samples $\mathbf{w}_i$ due to the chaotic nature of the billiard dynamics. Playing billiards in $V(z)$ is possible because each training point $(x_i, y_i) \in z$ defines a hyperplane $\{\mathbf{w} \in \mathcal{W} \mid y_i \langle \mathbf{x}_i, \mathbf{w} \rangle_{\mathcal{K}} = 0\}$. Hence, the version space is a convex polyhedron on the surface of $\mathcal{W}$. After $N$ bounces of the billiard ball the Bayes point was estimated by the running average of the samples.

²We synonymously call $h \in \mathcal{H}$ and $\mathbf{w} \in \mathcal{W}$ a classifier because there is a one-to-one correspondence between the two by virtue of (1).

Although this algorithm shows excellent generalisation performance when compared to state-of-the-art learning algorithms like support vector machines (SVM) [13], its effort scales like $O(m^2)$ and $O(N \cdot m^2)$ in terms of memory and computational requirements, respectively.

3 Sampling the Version
Space Clearly, all we need for estimating the Bayes point (4) is a set of classifiers $\mathbf{w}$ drawn uniformly from $V(z)$. In order to save computational resources it might be advantageous to achieve a uniform sample only approximately. The classical perceptron learning algorithm offers the possibility to obtain up to $m!$ different classifiers in version space simply by learning on different permutations of the training set. Given a permutation $\Pi : \{1, \ldots, m\} \to \{1, \ldots, m\}$, the perceptron algorithm starts with $\mathbf{w}_0 = \mathbf{0}$ and $t = 0$, and cycles through the training points in the order given by $\Pi$, adding $y_i \mathbf{x}_i$ to the weight vector whenever the current classifier errs on $(x_i, y_i)$. A classical theorem due to Novikoff [7] guarantees the convergence of this procedure and furthermore provides an upper bound on the number $t$ of mistakes needed until convergence. More precisely, if there exists a classifier $\mathbf{w}_{SVM}$ with margin $\gamma(\mathbf{w}_{SVM})$, then the number of mistakes until convergence - which is an upper bound on the sparsity of the solution - is not more than $R^2(\boldsymbol{x})\, \gamma^{-2}(\mathbf{w}_{SVM})$, where $R(\boldsymbol{x})$ is the smallest real number such that $\forall x \in \boldsymbol{x}: \|\phi(x)\|_{\mathcal{K}} \le R(\boldsymbol{x})$. The quantity $\gamma(\mathbf{w}_{SVM})$ is maximised for the solution $\mathbf{w}_{SVM}$ found by the SVM, and whenever the SVM is theoretically justified by results from learning theory (see [11, 13]) the ratio $d = R^2(\boldsymbol{x})\, \gamma^{-2}(\mathbf{w}_{SVM})$ is considerably less than $m$, say $d \ll m$. Algorithmically, we can benefit
from this sparsity by the following "trick": since the current solution can be written as $\mathbf{w}_t = \sum_{i=1}^{m} \alpha_i \mathbf{x}_i$, all we need to store is the $m$-dimensional vector $\boldsymbol{\alpha}$. Furthermore, we keep track of the $m$-dimensional vector $\mathbf{o}$ of real-valued outputs of the current solution at the training points. By definition, in the beginning $\mathbf{o} = \mathbf{0}$. Now, if $y_i o_i \le 0$ we update $\alpha_i$ by $\alpha_i \leftarrow \alpha_i + y_i$ and update $\mathbf{o}$ by $o_j \leftarrow o_j + y_i k(x_i, x_j)$, which requires only $m$ kernel calculations. In summary, the memory requirement of this algorithm is $2m$ and the number of kernel calculations is not more than $dm$. As a consequence, the computational requirement of this algorithm is no more than the computational requirement for the evaluation of the margin $\gamma(\mathbf{w}_{SVM})$! We suggest to use this efficient perceptron learning algorithm in order to obtain samples $\mathbf{w}_i$ for the computation of the Bayes point by (4).

Figure 1: (a) Histogram of generalisation errors (estimated on a test set) using a kernel Gibbs sampler. (b) Histogram of generalisation errors (estimated on a test set) using a kernel perceptron. (c) QQ plot of distributions (a) and (b). The straight line indicates that both distributions are very similar.
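A minimal sketch of this sampling scheme, assuming a precomputed kernel matrix and separable data; the function names and the explicit normalisation to unit length in feature space are our own choices.

```python
import numpy as np

def kernel_perceptron(K, y, order):
    """Train a kernel perceptron visiting the points in the given order.
    Assumes the data are separable in feature space (zero training error)."""
    m = len(y)
    alpha = np.zeros(m)
    o = np.zeros(m)                     # cached outputs o_j = <x_j, w>
    mistakes = True
    while mistakes:
        mistakes = False
        for i in order:
            if y[i] * o[i] <= 0:        # mistake: update alpha_i and all o_j
                alpha[i] += y[i]
                o += y[i] * K[i]        # m kernel values per mistake
                mistakes = True
    return alpha

def bayes_point(K, y, n_samples, rng):
    """Average perceptron solutions over random permutations of the data."""
    m = len(y)
    ws = []
    for _ in range(n_samples):
        a = kernel_perceptron(K, y, rng.permutation(m))
        a = a / np.sqrt(a @ K @ a)      # normalise to unit length in K
        ws.append(a)
    return np.mean(ws, axis=0)          # expansion vector of the Bayes point
```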
In order to investigate the usefulness of this approach experimentally, we compared
the distribution of generalisation errors of samples obtained by perceptron learning
on permuted training sets (as suggested earlier by [14]) with samples obtained
by a full Gibbs sampling [2]. For computational reasons, we used only 188 training
patterns and 453 test patterns of the classes "I" and "2" from the MNIST data
set3 . In Figure 1 (a) and (b) we plotted the distribution over 1000 random samples
using Using a quantile-quantile (QQ) plot technique we can compare both distributions
in one graph (see Figure 1 (c)). These plots suggest that by simple permutation
of the training set we are able to obtain a sample of classifiers exhibiting the
same generalisation error distribution as with time-consuming Gibbs sampling.
4 Experimental Results In our large scale experiment we used the full MNIST data
set with 60000 training examples and 10000 test examples of 28 x 28 grey value
images of handwritten digits. As input vector x we used the 784 dimensional vector
of grey values. The images were labelled by one of the ten classes "0" to "9". For each of the ten classes $y \in \{0, \ldots, 9\}$ we ran the perceptron algorithm $N = 10$ times, each time labelling all training points of class $y$ by +1 and the remaining training points by -1. On an Ultra Sparc 10 each learning trial took approximately 20-30 minutes. For the classification of a test image $x$ we calculated the real-valued output of all 100 different classifiers⁵, where we used the kernel⁴ $k$ given by (5); $(\boldsymbol{\alpha}_i)_j$ refers to the expansion coefficient corresponding to the $i$-th classifier and the $j$-th data point. Now, for each of the ten classes we calculated the real-valued decision of the Bayes point $\mathbf{w}_y$, and in a Bayesian spirit the final decision was carried out by $\operatorname{argmax}_y f_{bp,y}(x)$. Note that $f_{bp,y}(x)$ [9] can be interpreted as an (unnormalised) approximation of the posterior probability that $x$ is of class $y$ when restricted to the function class (1). In order to test the dependence of the generalisation error on the magnitude $\max_y f_{bp,y}(x)$ we fixed a certain rejection rate $r \in [0, 1]$ and rejected the set of $r \cdot 10000$ test points with the smallest value of $\max_y f_{bp,y}(x)$. The resulting plot is depicted in Figure 2.

³Available at http://www.research.att.com/~yann/ocr/mnist/.
⁴We decided to use this kernel because it showed excellent generalisation performance when using the support vector machine.
⁵For notational simplicity we assume that the first $N$ classifiers are classifiers for the class "0", the next $N$ for class "1" and so on.

Figure 2: Generalisation error as a function of the rejection rate for the MNIST data set. The SVM achieved 1.4% without rejection as compared to 1.46% for the BPM. Note that by rejection based on the real-valued output the generalisation error could be reduced to 0.1%, indicating that this measure is related to the probability of misclassification of single test points.
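The rejection experiment amounts to sorting test points by the confidence proxy and dropping the least confident fraction; a short sketch with hypothetical arrays:

```python
import numpy as np

def rejection_curve(scores, y_pred, y_true, rates):
    """scores : (n,) confidence proxy max_y f_{bp,y}(x) per test point.
    Returns the error on the kept points for each rejection rate."""
    order = np.argsort(scores)               # least confident first
    n = len(scores)
    errors = []
    for r in rates:
        kept = order[int(r * n):]            # reject the r*n least confident
        errors.append(np.mean(y_pred[kept] != y_true[kept]))
    return np.array(errors)
```

As can be seen from this plot,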
even without rejection the Bayes point has excellent generalisation performance⁶. Furthermore, rejection based on the real-valued output $f_{bp}(x)$ turns out to be excellent, thus reducing the generalisation error to 0.1%. One should also bear in mind that the learning time for this simple algorithm was comparable to that of SVMs. A very advantageous feature of our approach as compared to SVMs is its adjustable time and memory requirements and the "anytime" availability of a solution due to sampling. If the training set grows further and we are not able to spend more time with learning, we can adjust the number $N$ of samples used at the price of slightly worse generalisation error.

⁶Note that the best known result on this data set is 1.1%, achieved with a polynomial kernel of degree four. Nonetheless, for reasons of fairness we compared the results of both algorithms using the same kernel.

5 Conclusion In this paper we have presented an algorithm for approximating the Bayes point by rerunning the classical perceptron algorithm with a permuted training set. Here we particularly exploited the sparseness of the solution, which must exist
whenever the success of the SVM is theoretically justified. The restriction to
the zero training error case can be overcome by modifying the kernel as $k(x_i, x_j) \leftarrow k(x_i, x_j) + \lambda\, \delta_{ij}$. This technique is well known and was already suggested by Vapnik in 1995 (see [1]). Another interesting
question raised by our experimental findings is the following: By how much is
the distribution of generalisation errors over random samples from version space
related to the distribution of generalisation errors of the up to m! different
classifiers found by the classical perceptron algorithm? Acknowledgements We would
like to thank Bob Williamson for helpful discussions and suggestions on earlier drafts. Parts of this work were done during a research stay of both authors at the ANU Canberra. References [1] C. Cortes and V. Vapnik. Support Vector Networks. Machine Learning, 20:273-297, 1995. [2] T. Graepel and R. Herbrich. The kernel Gibbs sampler. In Advances in Neural Information Processing Systems 13, 2001. [3] R. Herbrich and T. Graepel. A PAC-Bayesian margin bound for linear classifiers: Why SVMs work. In Advances in Neural Information Processing Systems 13, 2001. [4] R. Herbrich, T. Graepel, and C. Campbell. Robust Bayes Point Machines. In Pro [5] D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 230-234, Madison, Wisconsin. [6] R. M. Neal. Markov chain Monte Carlo method based on ''slicing'' the density function. Technical report, Department of Statistics, University of Toronto, 1997. TR-9722. [7] A. Novikoff. On convergence proofs for perceptrons. In Report at the Symposium on Mathematical Theory of Automata, pages 24-26, Polytechnic Institute Brooklyn. [8] M. Opper and O. Winther. Gaussian processes for classification: Mean field algorithms. Neural Computation, 12(11), 2000. [9] J. Platt. Probabilities for SV machines. In Advances in Large Margin Classifiers. [10] P. Rujan and M. Marchand. Computing the Bayes kernel classifier. In Advances in Large Margin Classifiers, pages 329-348. MIT Press, 2000. [11] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory. [12] A. J. Smola. Learning with Kernels. PhD thesis, Technische Universitat Berlin, 1998. [13] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995. [14] T. Watkin. Optimal learning with a neural network. Europhysics Letters, 21:871-877. [15] C. Williams. Prediction with Gaussian Processes: From linear regression to linear prediction and beyond. Technical report, Neural Computing Research Group, Aston'
- source_sentence: Mathematical analysis of coarse-coded symbol memories in neural
networks
sentences:
- 'Introduction Measuring ways by which several neurons in the brain participate
in a specific computational task can shed light on fundamental neural information
processing mechanisms . While it is unlikely that complete information from any
macroscopic neural tissue will ever be available, some interesting insight can
be obtained from simultaneously recorded cells in the cortex of behaving animals.
The question we address in this study is the level of synergy, or the level of
cooperation, among brain cells, as determined by the information they provide
about the observed behavior of the animal. 1.1 The experimental data We analyze
simultaneously recorded units from behaving monkeys during a delayed response
behavioral experiment. The data was collected at the high brain function laboratory
of the Hadassah Medical School of the Hebrew University [1, 2]. In this task the
monkey had to remember the location of a visual stimulus and respond by touching
that location after a delay of 1-32 sec. Correct responses were rewarded by a
drop of juice. In one set of recordings six micro-electrodes were inserted simultaneously
to the frontal or prefrontal cortex [1, 3]. In another set of experiments the same
behavioral paradigm was used and recording were taken from the striatum - which
is the first station in basal ganglia (a sub-cortical ganglia)[2]. The cells recorded
in the striatum were the tonically active neurons[2], which are known to be the
cholinergic inter-neurons of the striatum. These cells are known to respond to
reward. The monkeys were trained to perform the task in two alternating modes, "Go" and "No-Go" [1]. Both sets of behavioral modes can be detected from the recorded spike trains using several statistical modeling techniques that include Hidden Markov Models (HMM) and Post Stimulus Histograms (PSTH). The details of these detection methods are reported elsewhere [4, 5]. For this paper it is important to know that we can significantly detect the correct behavior; for example, for "Go" vs. "No-Go", correct detection is achieved about 90% of the time, where the random baseline is 50% and the monkey''s average performance is 95% correct on this task.
2 Theoretical background Our measure of synergy level among cells is information theoretic and was recently proposed by Brenner et al. [6] for analysis of spikes generated by a single neuron. This is the first application of this measure to quantify cooperativity among neurons. 2.1 Synergy and redundancy A fundamental quantity in information theory is the mutual information between two random variables X and Y. It is defined as the cross-entropy (Kullback-Leibler divergence) between the joint distribution of the variables, $p(x, y)$, and the product of the marginal distributions $p(x)p(y)$. As such it measures the statistical dependence of the variables X and Y. It is symmetric in X and Y and has the following familiar relations to their entropies [7]:

$$I(X; Y) = H(X) + H(Y) - H(X, Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X)$$

When given three random variables $X_1$, $X_2$ and $Y$, one can consider
the mutual information between the joint variables (XI,X2 ) and the variable Y,
I(XI'' X 2; Y) (notice the position of the semicolon), as well as the mutual infor
mations I(XI; Y) and I(X2; Y). Similarly, one can consider the mutual informa
tion between Xl and X 2 conditioned on a given value of Y y, I(XI; X21y) DKL[P(X
I,X2Iy)IP(Xl ly)P(X2Iy)]'' as well as its average, the conditional mutual information,
Following Brenner et. al.[6] we define the synergy level of Xl and X2 with respect
to the variable Y as with the natural generalization to more than two variables
X . This expression can be rewritten in terms of entropies and conditional information
as follows: Depends On Y Independent of Y When the variables exhibit positive
synergy value, with respect to the variable Y, they jointly provide more information
on Y than when considered independently, as expected in synergetic cases. Negative
synergy values correspond to redundancy - the variables do not provide independent
information about Y. Zero synergy value is obtained when the variables are independent
of Y or when there is no change in their dependence when conditioned on Y. We
claim that this is a useful measure of cooperativity among neurons, in a given
computational task. It is clear from Eq. (3) that the synergy vanishes when the
dependence is unaffected by Y, since in that case sum_y p(y) I_y(X1; X2) = I(X1; X2).
In other words, the synergy value is nonzero only if the statistical dependence,
hence the mutual information between the variables, is affected by the value of
Y. It is positive when the mutual information increases, on average, when
conditioned on Y, and negative if this conditional mutual information decreases.
Notice that the value of synergy can be both positive and negative since information,
unlike entropy, is not sub-additive in the X variables.
3 Synergy among neurons Our measure of synergy among the units is based on the
ability to detect the behavioral mode from the recorded activity, as we discuss
below. As discussed above, synergy among neurons is possible only if their statistical
dependence change with time. An important case where synergy is not expected is
pure "population coding" [8]. In this case the cells are expected to fire independently,
each with its own fixed tuning curve. Our synergy value can thus be used to test
if the recorded units are indeed participating in a pure population code of this
kind, as hypothesized for certain motor cortical activity. Theoretical models
of the cortex that clearly predict nonzero synergy include at tractor neural networks
(ANN)[9] and synfire chain models(SFC)[3]. Both these models predict changes in
the collective activity patterns, as neurons move between attractors in the ANN
case, or when different synfire-chains of activity are born or disappear in the
SFC case. To the extent that such changes in the collective activity depend on
behavior, nonzero synergy values can be detected. It remains an interesting theoretical
challenge to estimate the quantitative synergy values for such models and compare
it to observed quantities. 3.1 Time-dependent cross correlations In our previous
studies[4] we demonstrated, using hidden Markov models of the activity, that the
pairwise cross-correlations in the same data can change signifi cantly with time,
depending on the underlying collective state of activity. These states, revealed
by the hidden Markov model, in turn depend on the behavior and enable its prediction.
Dramatic and fast changes in the cross-correlation of cells have also been shown
by others [10]. This finding indicates directly that the statistical dependence
of the neurons can change (rapidly) with time, in a way correlated to behavior.
This clearly suggests that nonzero synergy should be observed among these cortical
units, relative to this behavior. In the present study this theoretical hypothesis
is verified. 3.2 Redundancy cases If on the other hand the conditional mutual
information equals zero for all behavioral modes, i.e. I_y(X1; X2) = 0 for all
y in Y, while I(X1; X2) > 0, we expect to get negative synergy, or redundancy among the cells,
with respect to the behavior variable Y. We observed clear redundancy in another
part of the brain, the basal ganglia, during the same experiment, when the behavior
was the pre-reward and post-reward activity. In this case different cells provide
exactly the same information, which yields negative synergy values. 4 Experimental
results 4.1 Synergy measurement in practice To evaluate the synergy value among
different cells, it is necessary to estimate the conditional distribution p(ylx)
where y is the current behavior and x represent a single trial of spike trains
of the considered cells. Estimating this probability, however,
requires an underlying statistical
model, or a representation of the spike trains. Otherwise there is never enough data
since cortical spike trains are never exactly reproducible. In this work we choose
the rate representation, which is the simplest to evaluate. The estimation of
p(y|x) goes as follows: For each of the M behavioral modes (y1, y2, ..., yM), collect
spike train samples (the training data set). Using the training sample, construct
a Post Stimulus Time Histogram (PSTH), i.e. the rate as a function of time, for
each behavioral mode. Given a spike train outside of the training set, compute
the probability that it resulted from each of the M modes. The spike train is considered
correctly classified if the most probable mode is in fact the true behavioral
mode, and incorrectly otherwise. The fraction of correct classifications, for all
spike trains of a given behavioral mode yi, is taken as the estimate of p(yi|x),
and denoted P_Ci, where Ci is the identity of the cells used in the computation.
For the case of only two categories of behavior and for a uniform distribution
of the different categories, the value of the entropy H(Y) is the same for all
combinations of cells, and is simply H(Y) = -sum_y p(y) log2 p(y) = log2 2 = 1. The
full expression (in bits) for the synergy value can thus be written as follows:
if the first expression is larger than the second then there is (positive) synergy
and vice versa for redundancy. However there is one very important caveat. As
we saw the computation of the mutual information is not done exactly, and what
one really computes is only a lower bound . If the bound is tighter for multiple
cell calculation, the method could falsely infer positive synergy, and if the
bound is tighter for the single cell computation, the method could falsely infer
negative synergy. In previous works we have shown that the method we use for this
estimation is quite reasonable and robust[5], therefore, we believe that we have
even a conservative (i.e. less positive) estimate of synergy. 4.2 Observed synergy
values In the first set of experiments we tried to detect the behavioral mode
during the delay-period of correct trials. In this case the two types of behavior
were the "Go" and the "No-Go" described in the introduction. An example of this
detection problem is given in figure 1A. In this figure there are 100 examples
of multi-electrode recording of spike trains during the delay period. On the left
is the "Go-mode" data and on the right the "No-Go mode", for two cells. On the
lower part there is an example of two single spike trains that need to be classified
by the mode models. Figure 1: Raster displays of simultaneously
recorded cells in the 2 different areas, in each area there were 2 behavioral
modes. Table 1 gives some examples of detection results obtained by using 2 cells
independently, and by using their joint combination. It can be seen that the
synergy is positive and significant. We examined 19 recording session of the same
behavioral modes for two different animals and evaluated the synergy value. In
18 out of the 19 sessions there was at least one example of significant positive
synergy among the cells. For comparison we analyzed another set of experiments
in which the data was recorded from the striatum in the basal ganglia. An example
for this detection is shown in figure lB. The behavioral modes were the "pre-reward"
vs. the "post reward" periods. Nine recording sessions for the two different monkeys
were examined using the same detection technique. Although the detection results
improve when the number of cells increases, in none of these recordings was a positive
synergy value found. For most of the data the synergy value was close to zero,
i.e. the mutual information among two cells jointly was close to the sum of the
mutual information of the independent cells, as expected when the cells exhibit
(conditionally) independent activity. The prevailing difference between the synergy
measurements in the cortex and in the TANs of the basal ganglia is also strengthened
by the different mechanisms underlying those cells. The TANs are assumed to
be globally mediators of information in the striatum, a relatively simple task,
whereas the information processed in the frontal cortex in this task is believed
to be much more collective and complicated. Here we suggest a first handle for
quantitative detection of such different neuronal activities. Acknowledgments
Special thanks are due to Moshe Abeles for his encouragement and support, and
to William Bialek for suggesting the idea to look for the synergy among cortical
cells. We would also like to thank A. Raz, Hagai Bergman, and Eilon Vaadia for
sharing their data with us. The research at the Hebrew university was supported
in part by a grant from the United States-Israel Binational Science Foundation
(BSF). Table
1: Examples of synergy among cortical neurons. For each example the mutual information
of each cell separately is given together with the mutual information of the pair.
In parentheses the matching detection probability (average over p(y|x)) is also
given. The last column gives the percentage of increase from the mutual information
of the single cells to the mutual information of the pair. The table gives only
those pairs for which the percentage was larger than 20% and the detection rate
higher than 60%. (Table columns: Session, Cells, Cell 1, Cell 2, Both cells, Syn (%).) References [1] M.
Abeles, E. Vaadia, H. Bergman, Firing patterns of single units in the prefrontal
cortex and neural-network models, Network 1 (1990). [2] A. Raz et al., Neuronal
synchronization of tonically active neurons in the striatum of normal and parkinsonian
primates, J. Neurophysiol. 76:2083-2088. [3] M. Abeles, Corticonics (Cambridge
University Press, 1991). [4] I. Gat, N. Tishby and M. Abeles, Hidden Markov modeling
of simultaneously recorded cells in the associative cortex of behaving monkeys,
Network, 8:297-322. [5] I. Gat, N. Tishby, Comparative study of different supervised
detection methods of simultaneously recorded spike trains, in preparation. [6]
N. Brenner, S.P. Strong, R. Koberle, W. Bialek, and R. de Ruyter van Steveninck,
The Economy of Impulses and the Stiffness of Spike Trains, NEC Research Institute
Technical Note (1998). [7] T.M. Cover and J.A. Thomas, Elements of Information
Theory (Wiley, NY). [8] A.P. Georgopoulos, A.B. Schwartz, R.E. Kettner, Neuronal
Population Coding [9] D.J. Amit, Modeling Brain Function, (Cambridge University
Press, 1989). [10] E. Ahissar et al Dependence of Cortical Plasticity on Correlated
Activity of Single Neurons and on Behavioral Context, Science, 257:1412-1415 (1992).'
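The synergy measure in the passage above reduces to a difference of mutual informations, Syn(X1, X2; Y) = I(X1, X2; Y) - I(X1; Y) - I(X2; Y). The sketch below shows that computation from an empirical joint distribution; it is a hedged illustration only — the array layout and function names are our own assumptions, not anything from the paper.

```python
import numpy as np

def mutual_information(joint):
    """I(A; B) in bits from a 2-D joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal p(a)
    pb = joint.sum(axis=0, keepdims=True)   # marginal p(b)
    nz = joint > 0                          # avoid log(0) on empty cells
    return float((joint[nz] * np.log2(joint[nz] / (pa * pb)[nz])).sum())

def synergy(p_x1x2y):
    """Syn(X1, X2; Y) = I(X1, X2; Y) - I(X1; Y) - I(X2; Y), in bits.

    p_x1x2y: 3-D array indexed [x1, x2, y], summing to 1.
    """
    n1, n2, ny = p_x1x2y.shape
    i_pair = mutual_information(p_x1x2y.reshape(n1 * n2, ny))  # (X1, X2) jointly vs. Y
    i_x1 = mutual_information(p_x1x2y.sum(axis=1))             # X1 vs. Y
    i_x2 = mutual_information(p_x1x2y.sum(axis=0))             # X2 vs. Y
    return i_pair - i_x1 - i_x2

# Redundant cells: X2 is an exact copy of X1, and both perfectly predict Y.
p = np.zeros((2, 2, 2))
p[0, 0, 0] = p[1, 1, 1] = 0.5
print(synergy(p))  # -1.0 bit: pure redundancy, as in the basal ganglia result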
- 'Introduction A distributed representation is a memory scheme in which each entity
(concept, symbol) is represented by a pattern of activity over many units [3].
If each unit participates in the representation of many entities, it is said to
be coarsely tuned, and the memory itself is called a coarse-coded memory. Coarse-coded
memories have been used for storing symbols in several neural network symbol processing
models, such as Touretzky and Hinton''s distributed connectionist production system
DCPS [8,9], Touretzky''s distributed implementation of linked list structures
on a Boltzmann machine, BoltzCONS [10], and St. John and McClelland''s PDP model
of case role defaults [6]. In all of these models, memory capacity was measured
empirically and parameters were adjusted by trial and error to obtain the desired
behavior. We are now able to give a mathematical foundation to these experiments
by analyzing the relationships among the fundamental memory parameters. There
are several paradigms for coarse-coded memories. In a feature-based representation,
each unit stands for some semantic feature. Binary units can code features with
binary values, whereas more complicated units or groups of units are required
to code more complicated features, such as multi-valued properties or numerical
values from a continuous scale. The units that form the representation of a concept
define an intersection of features that constitutes that concept. Similarity between
concepts composed of binary features can be measured by the Hamming distance between
their representations. In a neural network implementation, relationships between
concepts are implemented via connections among the units forming their representations.
Certain types of generalization phenomena thereby emerge automatically. A different
paradigm is used when representing points in a multidimensional continuous space
[2,3]. Each unit encodes values in some subset of the space. Typically the
subsets are hypercubes or hyperspheres, but they
may be more coarsely tuned along some dimensions than others [1]. The point to
be represented is in the subspace formed by the intersection of all active units.
As more units are turned on, the accuracy of the representation improves. The
density and degree of overlap of the units'' receptive fields determines the system''s
resolution [7]. Yet another paradigm for coarse-coded memories, and the one we
will deal with exclusively, does not involve features. Each concept, or symbol,
is represented by an arbitrary subset of the units, called its pattern. Unlike
in feature-based representations, the units in the pattern bear no relationship
to the meaning of the symbol represented. A symbol is stored in memory by turning
on all the units in its pattern. A symbol is deemed present if all the units in
its pattern are active.* The receptive field of each unit is defined as the set
of all symbols in whose pattern it participates. We call such memories coarse
coded symbol memories (CCSMs). We use the term "symbol" instead of "concept" to
emphasize that the internal structure of the entity to be represented is not involved
in its representation. In CCSMs, a short Hamming distance between two symbols
does not imply semantic similarity, and is in general an undesirable phenomenon.
The efficiency with which CCSMs handle sparse memories is the major reason they
have been used in many connectionist systems, and hence the major reason for studying
them here. The unit-sharing strategy that gives rise to efficient encoding in
CCSMs is also the source of their major weakness. Symbols share units with other
symbols. As more symbols are stored, more and more of the units are turned on.
At some point, some symbol may be deemed present in memory because all of its
units are turned on, even though it was not explicitly stored: a "ghost" is born.
Ghosts are an unwanted phenomenon arising out of the overlap among the representations
of the various symbols. The emergence of ghosts marks the limits of the system''s
capacity: the number of symbols it can store simultaneously and reliably. 2 Definitions
and Fundamental Parameters A coarse coded symbol memory in its most general form
consists of: A set of N binary state units. An alphabet of α symbols to be represented.
Symbols in this context are atomic entities: they have no constituent structure.
A memory scheme, which is a function that maps each symbol to a subset of the
units - its pattern. The receptive field of a unit is defined as the set of all
symbols to whose pattern it belongs (see Figure 1). (*This
criterion can be generalized by introducing a visibility threshold: a fraction
of the pattern that should be on in order for a symbol to be considered present.
Our analysis deals only with a visibility criterion of 100%, but can be generalized.)
(Figure 1: A memory scheme (N = 6, α = 8) defined in terms of units
u_i and symbols s_j. The columns are the symbols'' patterns. The rows are the units''
receptive fields.) The exact nature of the memory scheme mapping determines the properties of the memory,
and is the central target of our investigation. As symbols are stored, the memory
fills up and ghosts eventually appear. It is not possible to detect a ghost simply
by inspecting the contents of memory, since there is no general way of distinguishing
a symbol that was stored from one that emerged out of overlaps with other symbols.
(It is sometimes possible, however, to conclude that there are no ghosts.) Furthermore,
a symbol that emerged as a ghost at one time may not be a ghost at a later time
if it was subsequently stored into memory. Thus the definition of a ghost depends
not only on the state of the memory but also on its history. Some memory schemes
guarantee that no ghost will emerge as long as the number of symbols stored does
not exceed some specified limit. In other schemes, the emergence of ghosts is
an ever-present possibility, but its probability can be kept arbitrarily low by
adjusting other parameters. We analyze systems of both types. First, two more
bits of notation need to be introduced: Pghost: Probability of a ghost. The probability
that at least one ghost will appear after some number of symbols have been stored.
k: Capacity. The maximum number of symbols that can be stored simultaneously before
the probability of a ghost exceeds a specified threshold. If the threshold is
0, we say that the capacity is guaranteed. A localist representation, where every
symbol is represented by a single unit and every unit is dedicated to the representation
of a single symbol, can now be viewed as a special case of coarse-coded memory,
where k = N = α and Pghost = 0. Localist representations are well suited for memories
that are not sparse. In these cases, coarse coded memories are at a disadvantage.
In designing coarse-coded symbol memories we are interested in cases where k «
N « α. The permissible probability for a ghost in these systems should be low
enough so that its impact can be ignored. 3 Analysis of Four Memory Schemes
3.1 Bounded Overlap (guaranteed capacity) If we want to construct the memory scheme
with the largest possible α (given N and k) while guaranteeing Pghost = 0, the problem
can be stated formally as: Given a set of size N, find the largest collection
of subsets of it such that no union of k such subsets subsumes any other subset
in the collection. This is a well known problem in Coding Theory, in slight disguise.
Unfortunately, no complete analytical solution is known. We therefore simplify
our task and consider only systems in which all symbols are represented by the
same number of units (i.e. all patterns are of the same size). In mathematical
terms, we restrict ourselves to constant weight codes. The problem then becomes:
Given a set of size N, find the largest collection of subsets of size exactly
L such that no union of k such subsets subsumes any other subset in the collection.
There are no known complete analytical solutions for the size of the largest collection
of patterns even when the patterns are of a fixed size. Nor is any efficient procedure
for constructing such a collection known. We therefore simplify the problem further.
We now restrict our consideration to patterns whose pairwise overlap is bounded
by a given number. For a given pattern size L and desired capacity k, we require
that no two patterns overlap in more than m units, where m = floor((L - 1)/k) (Equation 1). Memory schemes that
obey this constraint are guaranteed a capacity of at least k symbols, since any
k symbols taken together can overlap at most L - 1 units in the pattern of any
other symbol - one unit short of making it a ghost. Based on this constraint,
our mathematical problem now becomes: Given a set of size N, find the largest
collection of subsets of size exactly L such that the intersection of any two
such subsets is of size at most m (where m is given by Equation 1). Coding theory has
yet to produce a complete solution to this problem, but several methods of deriving
upper bounds have been proposed (see for example [4]). The simple formula we use
here is a variant of the Johnson Bound. Let α_bo denote the maximum number of symbols
attainable in memory schemes that use bounded overlap; the Johnson bound
(Equation 2) gives an upper bound on α_bo, and is known to be an exact solution
asymptotically (that is, when N, L, m → ∞ and
their ratios remain finite). Since we are free to choose the pattern size, we
optimize our memory scheme by maximizing the above expression over all possible
values of L. For the parameter subspace we are interested in here (N ≤ 1000, k ≤ 50)
we use numerical approximation to obtain α_bo(N, k) ≤ e^{0.367 N/k} (recall that
m is a function of L and
k). Thus the upper bound we derived depicts a simple exponential relationship
between α and N/k. Next, we try to construct memory schemes of this type. A Common
Lisp program using a modified depth-first search constructed memory schemes for
various parameter values, whose α''s came within 80% to 90% of the upper bound.
These results are far from conclusive, however, since only a small portion of
the parameter space was tested. In evaluating the viability of this approach,
its apparent optimality should be contrasted with two major weaknesses. First,
this type of memory scheme is hard to construct computationally. It took our program
several minutes of CPU time on a Symbolics 3600 to produce reasonable solutions
for cases like N = 200, k = 5, m = 1, with an exponential increase in computing time
for larger values of m. Second, if CCSMs are used as models of memory in naturally
evolving systems (such as the brain), this approach places too great a burden
on developmental mechanisms. The importance of the bounded overlap approach lies
mainly in its role as an upper bound for all possible memory schemes, subject
to the simplifications made earlier. All schemes with guaranteed capacities can
be measured relative to equation 3. 3.2 Random Fixed Size Patterns (a stochastic
approach) Randomly produced memory schemes are easy to implement and are attractive
because of their naturalness. However, if the patterns of two symbols coincide,
the guaranteed capacity will be zero (storing one of these symbols will render
the other a ghost). We therefore abandon the goal of guaranteeing a certain capacity,
and instead establish a tolerance level for ghosts, Pghost. For large enough memories,
where stochastic behavior is more robust, we may expect reasonable capacity even
with very small Pghost. In the first stochastic approach we analyze, patterns
are randomly selected subsets of a fixed size L. Unlike in the previous approach,
choosing k does not bound α. We may define as many symbols as we wish, although
at the cost of increased probability of a ghost (or, alternatively, decreased
capacity). The probability of a ghost appearing after k symbols have been stored
is given by Equation 4. T_{N,L}(k, c) is the probability that exactly c units will
be active after k symbols have been stored; it is defined recursively by Equation
5. We have constructed various coarse-coded memories with random fixed-size receptive
fields and measured their capacities. The experimental results show good agreement
with the above equation. The optimal pattern size for fixed values of N, k, and
α can be determined by binary search on Equation 4, since Pghost(L) has exactly
one maximum in the interval [1, N]. However, this may be expensive for large N.
A computational shortcut can be achieved by estimating the optimal L and searching
in a small interval around it. A good initial estimate is derived by replacing
the summation in Equation 4 with a single term involving E[c]: the expected value
of the number of active units after k symbols have been stored. The latter can
be expressed in closed form; the estimated L is the one that maximizes the resulting expression.
An alternative formula, developed by Joseph Tebelskis, produces very good
approximations to Eq. 4 and is much more efficient to compute. After storing k symbols
in memory, the probability P_x that a single arbitrary symbol x has become a ghost
can be written in closed form. If we now assume that each symbol''s P_x is independent of that of any
other symbol, we obtain the desired approximation. This assumption of independence is not strictly true,
but the relative error was less than 0.1% for the parameter ranges and Pghost
values we considered. We have constructed the two-dimensional table
T_{N,L}(k, c) for a wide range of (N, L) values (70 ≤ N ≤ 1000, 7 ≤ L ≤ 43), and produced
graphs of the relationships between N, k, α, and Pghost for optimum pattern sizes,
as determined by Equation 4. The results show an approximately exponential
relationship between α and N/k [5]. Thus, for a fixed number of symbols, the capacity
is proportional to the number of units. Let α_rfp denote the maximum number of
symbols attainable in memory schemes that use random fixed-size patterns. Some
typical relationships, derived from the data, are given in Table 1. 3.3 Random Receptors (a stochastic
approach) A second stochastic approach is to have each unit assigned to each symbol
with an independent fixed probability s. This method lends itself to easy mathematical
analysis, resulting in a closed-form analytical solution. After storing k symbols,
the probability that a given unit is active is 1 - (1 - s)^k (independent of any
other unit). For a given symbol to be a ghost, every unit must either be active
or else not belong to that symbol''s pattern. That will happen with probability
[1 - s(1 - s)^k]^N, and thus the probability of a ghost is Pghost = α[1 - s(1 - s)^k]^N.
Assuming Pghost « 1 and k « α (both hold in our case), the expression can be
simplified and α extracted: α ≈ Pghost · e^{N·s(1-s)^k}. We can now optimize by finding the value of s that maximizes
α, given any desired upper bound on the expected value of Pghost. This is done
straightforwardly by solving ∂α/∂s = 0. Note that sN corresponds to L in the previous
approach. The solution is s = 1/(k + 1), which yields, after some algebraic manipulation, the result in Table 1.
A comparison of the results using the two stochastic approaches reveals an interesting
similarity. For large k, with Pghost = 0.01, the term 0.468 N/k of Equation 8 can be
seen as a numerical approximation to the log term in Equation 11, and the multiplicative
factor of 0.0086 in Equation 8 approximates Pghost in Equation 11. This is hardly
surprising, since the Law of Large Numbers implies that in the limit (N, k → ∞,
with s fixed) the two methods are equivalent. Finally, it should be noted
that the stochastic approaches we analyzed generate a family of memory schemes,
with non-identical ghost-probabilities. Pghost in our formulas is therefore better
understood as an expected value, averaged over the entire family. 3.4 Partitioned
Binary Coding (a reference point) The last memory scheme we analyze is not strictly
distributed. Rather, it is somewhere in between a distributed and a localist representation,
and is presented for comparison with the previous results. For a given number
of units N and desired capacity k, the units are partitioned into k equal-size
"slots," each consisting of N k units (for simplicity we assume that k divides
N). Each slot is capable of storing exactly one symbol. The most efficient representation
for all possible symbols that may be stored into a slot is to assign them binary
codes, using the N/k units of each slot as bits. This would allow 2^{N/k} symbols
to be represented. Using binary coding, however, will not give us the required
capacity of 1 symbol per slot, since binary patterns subsume one another. For example,
storing the code ''10110'' into one of the slots will cause the codes ''10010'',
''10100'' and ''00010'' (as well as several other codes) to become ghosts. A possible
solution is to use only half of the bits in each slot for a binary code, and set
the other half to the binary complement of that code (we assume that N/k is even).
This way, the codes are guaranteed not to subsume one another. Let α_pbc denote
the number of symbols representable using a partitioned binary coding scheme;
the result appears in Table 1. Once again, α is exponential in N/k. The form of the result closely resembles
the estimated upper bound on the Bounded Overlap method given in Equation 3. There
is also a strong resemblance to Equations 8 and 11, except that the fractional
multiplier in front of the exponential, corresponding to Pghost, is missing. Pghost
is 0 for the Partitioned Binary Coding method, but this is enforced by dividing
the memory into disjoint sets of units rather than adjusting the patterns to reduce
overlap among symbols. As mentioned previously, this memory scheme is not really
distributed in the sense used in this paper, since there is no one pattern associated
with a symbol. Instead, a symbol is represented by any one of a set of k patterns,
each N/k bits long, corresponding to its appearance in one of the k slots. To
check whether a symbol is present, all k slots must be examined. To store a new
symbol in memory, one must scan the k slots until an empty one is found. Equation
12 should therefore be used only as a point of reference. 4 Measurement of DCPS
The three distributed schemes we have studied all use unstructured patterns, the
only constraint being that patterns are at least roughly the same size. Imposing
more complex structure on any of these schemes is likely to reduce the capacity
somewhat. Table 1 (summary of results for various memory schemes): Bounded
Overlap, α_bo(N, k) ≤ e^{0.367 N/k}; Random Fixed-size Patterns, α_rfp(Pghost =
0.01) ≈ 0.0086 · e^{0.468 N/k}; Random Receptors, α_rr ≈ Pghost · e^{N·k^k/(k+1)^(k+1)};
Partitioned Binary Coding, α_pbc ≈ e^{0.347 N/k}. In order to quantify
this effect, we measured the memory capacity of DCPS (BoltzCONS uses the same
memory scheme) and compared the results with the theoretical models analyzed above.
DCPS''s memory scheme is a modified version of the Random Receptors method [5].
The symbol space is the set of all triples over a 25-letter alphabet. Units have
fixed-size receptive fields organized as 6 x 6 x 6 subspaces. Patterns are manipulated
to minimize the variance in pattern size across symbols. The parameters for DCPS
are: N = 2000, with a pattern-size deviation of 1.5. When Pghost = 0.01 the measured capacity was k = 48
symbols. By substituting for N in Equation 11 we find that the highest k value
for which α_rr ≥ 15625 is 51. There does not appear to be a significant cost for
maintaining structure in the receptive fields. 5 Summary and Discussion Table
1 summarizes the results obtained for the four methods analyzed. Some differences
must be emphasized: α_bo and α_pbc deal with guaranteed capacity, whereas α_rfp
and α_rr are meaningful only for Pghost > 0. α_bo is only an upper bound. α_rfp
is based on numerical estimates. α_pbc is based on a scheme which is not strictly
coarse-coded. The similar functional form of all the results, although not surprising,
is aesthetically pleasing. Some of the functional dependencies among the various
parameters can be derived informally using qualitative arguments. Only a rigorous
analysis, however, can provide the definite answers that are needed for a better
understanding of these systems and their scaling properties. Acknowledgments
We thank Geoffrey Hinton, Noga Alon and Victor Wei for helpful comments, and Joseph
Tebelskis for sharing with us his formula for approximating Pghost in the case
of fixed pattern sizes. This work was supported by National Science Foundation
grants IST-8516330 and EET-8716324, and by the Office of Naval Research under
contract number N00014-86-K-0678. The first author was supported by a National
Science Foundation graduate fellowship. References [1] Ballard, D. H. (1986) Cortical
connections and parallel processing: structure and function. Behavioral and Brain
Sciences 9(1). [2] Feldman, J. A., and Ballard, D. H. (1982) Connectionist models
and their properties. Cognitive Science 6, pp. 205-254. [3] Hinton, G. E., McClelland,
J. L., and Rumelhart, D. E. (1986) Distributed representations. In D. E. Rumelhart
and J. L. McClelland (eds.), Parallel Distributed Processing: Explorations in
the Microstructure of Cognition, volume 1. Cambridge, MA: MIT Press. [4] MacWilliams,
F.J., and Sloane, N.J.A. (1978) The Theory of Error-Correcting Codes, North-Holland.
[5] Rosenfeld, R. and Touretzky, D. S. (1987) Four capacity models for coarse-coded
symbol memories. Technical report CMU-CS-87-182, Carnegie Mellon University Computer
Science Department, Pittsburgh, PA. [6] St. John, M. F. and McClelland, J. L.
(1986) Reconstructive memory for sentences: a PDP approach. Proceedings of the
Ohio University Inference Conference. [7] Sullins, J. (1985) Value cell encoding
strategies. Technical report TR-165, Computer Science Department, University
of Rochester, Rochester, NY. [8] Touretzky, D. S., and Hinton, G. E. (1985) Symbols
among the neurons: details of a connectionist inference architecture. Proceedings
of IJCAI-85, Los Angeles, CA. [9] Touretzky, D. S., and Hinton, G. E. (1986) A
distributed connectionist production system. Technical report CMU-CS-86-172,
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA. [10]
Touretzky, D. S. (1986) BoltzCONS: reconciling connectionism with the recursive
nature of stacks and trees. Proceedings of the Eighth Annual Conference of the
Cognitive Science Society, Amherst, MA, pp. 522-530.'
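The random-receptors analysis in the passage above, Pghost ≈ α[1 − s(1 − s)^k]^N with the optimal s = 1/(k + 1), turns directly into a capacity estimate. The following is a minimal sketch under those formulas; the function names and the linear search loop are our own illustrative choices, not anything from the paper.

```python
def p_ghost_random_receptors(N, alpha, k, s):
    """Expected ghost probability after storing k of alpha symbols, where each
    unit joins each symbol's pattern independently with probability s:
    Pghost ~ alpha * [1 - s*(1 - s)**k]**N."""
    return alpha * (1.0 - s * (1.0 - s) ** k) ** N

def capacity(N, alpha, threshold=0.01):
    """Largest k whose ghost probability stays under the threshold,
    using the optimal receptor probability s = 1/(k + 1)."""
    k = 0
    while p_ghost_random_receptors(N, alpha, k + 1, 1.0 / (k + 2)) <= threshold:
        k += 1
    return k

# DCPS-like parameters from the text: N = 2000 units, 25**3 = 15625 symbols.
print(capacity(2000, 25 ** 3))  # ~51, consistent with the measured capacity of 48
```

Run with the DCPS-like parameters quoted in the text, the search lands near k = 51, matching the analysis reported above.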
- 'INTRODUCTION 1.1 THE MAUTHNER SYSTEM Much is known about the brainstem system
that controls fast-start escapes in teleost fish. The most prominent feature of
this network is the pair of large Mauthner cells whose axons cross the midline
and descend down the spinal cord to synapse on primary motoneurons. The Mauthner
system also includes inhibitory neurons, the PHP cells, which have a unique and
intense field effect inhibition at the spike initiating zone of the Mauthner cells
(Faber and Korn, 1978). The Mauthner system is part of the full brainstem escape
network which also includes two pairs of cells homologous to the Mauthner cell
and other populations of reticulospinal neurons. With this network fish initiate
escapes only from appropriate stimuli, turn away from the offending stimulus,
and do so very rapidly with a latency around 15 msec in goldfish. The Mauthner
cells play an important role in these functions. Only one 574 Directional Hearing
by the Mauthner System 575 fires thus controlling the direction of the initial
turn, and it fires very quickly (4-5 msec). They also have high thresholds due
to instrinsic membrane properties and the inhibitory inlluence of the PHP cells.
(For reviews, see Eaton, et al, 1991 and Faber and Korn, 1978.) Acoustic stimuli
are thought to be sufficient to trigger the response (Blader, 1981), both Mauthner
cells and PHP cells receive innervation from primary auditory fibers (Faber and
Korn, 1978). In addition, the Mauthner cells have been shown physio logically
to be very sensitive to acoustic pressure (Canfield and Eaton, 1990). 1.2 LOCALIZING
SOUNDS UNDERWATER In contrast to terrestrial vertebrates, there are several reasons
for supposing that fish do not use time of arrival or intensity differences between
the two ears to localize sounds: underwater sound travels over four times as fast
as in air; the fish body provides no acoustic shadow; and fish use a single transducer
to sense pressure which is conveyed equally to the two ears. Sound pressure is
transduced into vibrations by the swim bladder which, in goldfish, is mechanically
linked to the inner ear. Fish are sensitive to an additional component of the
acoustic wave, the particle motion. Any particle of the medium taking part in the
propagation of a longitudinal wave will oscillate about an equilibrium point along
the axis of propagation. Fish have roughly the same density as water, and will
experience these oscillations. The motion is detected by the bending of sensory
hairs on auditory receptor cells by the otolith, an inertial mass suspended above
the hair cells. This component of the sound will provide the axis of propagation,
but there is a 180 degree ambiguity. Both pressure and particle motion are sensed
by hair cells of the inner ear. In goldfish these signals may be nearly segregated.
The linkage with the swim bladder impinges primarily on a boney chamber containing
two of the endorgans of the inner ear: the saccule and the lagena. The utricle
is a third endorgan also thought to mediate some acoustic function, without such
direct input from the swim bladder. Using both of these components fish can localize
sounds. According to the phase model (Schuijf, 1981) fish analyze the phase difference
between the pressure component of the sound and the particle displacement component
to calculate distance and direction. When pressure is increasing, particles will
be pushed in the direc tion of sound propagation, and when pressure is decreasing
particles will be pulled back. There will be a phase lag between pressure and
particle motion which varies with frequency and distance from the sound source.
This, and the separation of the pressure from the displacement signals in the
ear of some species pose the greatest problems for theories of sound localization
in fish. The acoustically triggered escape in goldfish is a uniquely tractable
problem in underwater sound localization. First, there is the fairly good segregation
of pressure from particle motion at the sensory level. Second I the escape is
very rapid. The decision to turn left or right is equivalent to the firing of
one or the other Mauthner cell, and this happens within about 4 msec. With transmission
delay, this decision relies only on the initial 2 msec or so of the stimulus.
For most salient frequencies, the phase lag will not introduce uncertainty: both
the first and second derivatives of particle position and acoustic pressure will
be either positive or negative. 1.3 THE XNOR MODEL (Figure 1: Truth table
and minimal network for the XNOR model.) Given the above simplification of the
problem, we can see that each Mauthner cell must perform a logical operation (Guzik
and Eaton, 1993j Eaton et al, 1994). The left Mauthner cell should fire when sounds
are located on the left, and this occurs when either pressure is increasing and
particle motion is from the left or when pressure is decreasing and particle motion
is from the right. We can call displacement from the left positive for the left
Mauthner cell, and immediately we
have the logical operator exclusive-nor (or XNOR). The right Mauthner cell must
solve the same problem with a redefinition of right displacement as positive.
The conditions for this logic gate are shown in figure 1A for both Mauthner cells.
This analysis simplifies our task of understanding the computational role of individual
elements in the system. For example, a minimal network could appear as in Figure 1.
In this model PHP units perform a logical sub-task of the XNOR as AND gates. This
model requires at least two functional classes of PHP units on each side of the
brain. These PHP units will be activated for the combinations of pressure and
displacement that indicate a sound coming from the wrong direction for the Mauthner
cell on that side. Both Mauthner cells are activated by sufficient changes in
pressure in either direction, high or low, and will be gated by the PHP cells.
This minimal model emerged from explorations of the system using the connectionist
paradigm, and inspired us to extend our efforts to a more realistic context. 2
THE NETWORK We used a connectionist model to explore candidate solutions to the
leftright dis crimination problem that include the populations of neurons known
to exist and include a distributed input resembling the sort available from the
hair cells of the inner ear. We were interested in generating a number of alternative
solutions to be better prepared to interpret physiological recordings from live
goldfish, and to look for variations of, or alternatives to, the XNOR model. 2.1
THE ARCHITECTURE As shown in figure 2, there are four layers in the connectionist
model. The input layer consists of four pools of hair cell units. These represent
the sensory neurons of the inner ear. There are two pools on each side: the saccule
and the utricle. Treating only the horizontal plane, we have ignored the lagena
in this model. The saccule is the organ of pressure sensation and the utricle
is treated as the organ of particle motion. Each pool contains 16 hair cell units
maximally responsive for displacements of their sensory hairs in one particular
direction. They are activated as the cosine of the difference between their preferred
direction and the stimulus deflection. All other units use sigmoidal activation
functions. The next layer consists of units representing the auditory fibers of
the VIIIth nerve. Each pool receives inputs from only one pool of hair cell units,
as nerve fibers have not been seen to innervate more than one endorgan. There
are 10 units per fiber The fiber units provide input to both the inhibitory PHP
units, and to the Mauthner units. There are four pools of PHP units, two on each
side of the fish. One set on each side represents the collateral PHP eells, and
the other set represents the commissural PHP cells (Faber and Korn, 1978). Both
types receive inputs from the auditory fibers. The collaterals project only to
the Mauthner cell on the same side. The commissurals project to both Mauthner
cells. There are five units per PHP pool. The Mauthner cell
units receive inputs from saccular and utricular fibers on their same side only,
as well as inputs from a single collateral PHP population and both commissural
PHP populations. (Figure 2: The architecture, showing hair cell pools for the left
and right saccule and utricle, auditory nerve fiber pools, and the left and right Mauthner units.)
Weights from the PHP units are all constrained to be negative, while all others
are constrained to be positive. The weights are implemented using the function
below, positive or negative depending on the polarity of the weight. The function
asymptotes to zero for negative values, and to the identity function for values
above 2. This function vastly improved learning compared with the simpler, but
highly nonlinear exponential function used in earlier versions of the model. 2.2
TRAINING We used a total of 240 training examples. We began with a set of 24 directions
for particle motion, evenly distributed around 360 degrees. These each appeared
twice, once with increasing pressure and once with decreasing pressure, making
a base set of 48 examples. Pressure was introduced as a deflection across saccular
hair cells of either 0 degrees for low pressure, or 180 degrees for high pressure.
These should be thought of as reflecting the expansion or compression of the swim
bladder. Targets for the Mauthner cells were either 0 or 1 depending upon the
conditions as described in the XNOR model, in figure 1A.
Next, by randomly perturbing the activations of the hair
cells for these 48 patterns, we generated 144 noisy examples. These were randomly
increased or decreased by up to 10%. An additional 48 examples were generated by dividing
the hair cell activity by two to represent sub-threshold stimuli. These last 48
targets were set to zero. The network was trained in batch mode with backpropagation
to minimize a cross entropy error measure, using conjugate gradient search. Unassisted
backpropagation was unsuccessful at finding solutions. For the eight solutions
discussed here, two parameters were varied at the inputs. In some solutions the
utricle was stimulated with a vector sum of the displacement and the pressure components,
or a "mixed" input. In some solutions the hair cells in the utricle are not distributed
uniformly, but in a gaussian manner with the mean tuning of 45 degrees to the
right or left, in the two ears respectively. This approximates the actual distribution
of hair cells in the goldfish utricle (Platt, 1977). 3 RESULTS Analyzing the activation
of the hidden units as a function of input pattern, we found activity consistent
with known physiology, nothing inconsistent with our knowledge of the system,
and some predictions to be evaluated during intracellular recordings from PHP cells
and auditory afferents. First, many PHP cells were found exhibiting a logical
function, which is consistent with our minimal model described above. These tended
to project only to one Mauthner cell unit, which suggests that primarily the collateral
PHP cells will demonstrate logical properties. Most logical PHP units were NAND
gates with very large weights to one Mauthner cell. An example is a unit which
is on for all stimuli except those having displacements anywhere on the left when
pressure is … Second, saccular fibers tended to be either sensitive to high or low
pressure, consistent with recordings of Furukawa and Ishii (1967). In addition
there was a class which looked like threshold fibers, highly active for all supra-threshold
stimuli, and inactive for all sub-threshold stimuli. There were some fibers with
no obvious selectivity, as well. Third, utricular fibers often demonstrate sensitivity
for displacements exclusively from one side of the fish, consistent with our minimal
model. Right and left utricular fibers have not yet been demonstrated in the real
system. Utricular fibers also demonstrated more coarsely tuned, less interpretable
receptive fields. All solutions that included a mixed input to the utricle, for
example, produced fibers that seemed to be "not 180 degree" or "not 0 degree",
countering the pressure vectors. We interpret these fibers as doing clean-up given
the absence of negative weights at that layer. Fourth, sub-threshold behavior
of units is not always consistent with their supra threshold behavior. At sub-threshold
levels of stimulation the activity of units may not reflect their computational
role in the behavior. Thus, intracellular recordings should explore stimulus ranges
known to elicit the behavior. Fifth, Mauthner units usually
receive very strong inputs from pressure fibers. This is consistent with physiological
recordings which suggest that the Mauthner cells in goldfish are more sensitive
to sound pressure than displacement (Canfield and Eaton, 1990). Sixth, Mauthner cells always
acquired relatively equal high negative biases. This is consistent with the known
low input resistance of the real Mauthner cells, giving them a high threshold
(Faber and Korn, 1978). Seventh, PHP cells that maintain substantial bilateral
connections tend to be tonically active. These contribute additional negative
bias to the Mauthner cells. The relative sizes of the connections are often asymmetric.
This suggests that the commissural PHP cells serve primarily to regulate Mauthner
threshold, ensuring behavioral response only to intense stimuli, consistent with
Faber and Korn (1978). These cells could only contribute to a partial solution
of the XNOR problem. Eighth, all solutions consistently used logic gate PHP units
for only 50% to 75% of the training examples. Probably distributed solutions relying
on the direct connections of auditory nerve fibers to Mauthner cells were more
easily learned, and logic gate units only developed to handle the unsolved cases.
Cases solved without logic gate units were solved by asymmetric projections to
the Mauthner cells of one polarity of pressure and one class of direction fibers,
left or right. Curiously, most of these cases involved a preferential projection
from high pressure fibers to the Mauthner units, along with directional fibers
encoding displacements from each Mauthner unit''s positive direction. This means
the logic gate units tended to handle the low pressure cases. This may be a result
of the presence of the asymmetric distributions of utricular hair cells in 6 out
of the 8 solutions. 4 CONCLUSIONS We have generated predictions for the behavior
of neurons in the Mauthner system under different conditions of acoustic stimulation.
The predictions generated with our connectionist model are consistent with our
interpretation of the phase model for underwater sound localization in fishes
as a logical operator. The results are also consistent with previously described
properties of the Mauthner system. Though perhaps based on the characteristics
more of the training procedure, our solutions suggest that we may find a mixed
solution in the fish. Direct projections to the Mauthner cells from the auditory
nerve perhaps handle many of the commonly encountered acoustic threats. The results
of Blaxter (1981) support the idea that fish do escape from stimuli regardless
of the polarity of the initial pressure change. Without significant nonlinear
processing at the Mauthner cell itself, or more complex processing in the auditory
fibers, direct connections could not handle all of these cases. These possibilities
deserve exploration. We propose different computational roles for the two classes
of inhibitory PHP neurons. We expect only unilaterally-projecting PHP cells to
demonstrate some logical function of pressure and particle motion. We believe
that some elements of the Mauthner system must be found to demonstrate such minimal
logical functions if the phase model is an explanation for left-right discrimination
by the Mauthner system. We are
currently preparing to deliver controlled acoustic stimuli to goldfish during
acute intracellular recording procedures from the PHP neurons, the afferent fibers
and the Mauthner cells. Our insights from this model will greatly assist us in
designing the stimulus regimen, and in interpreting our experimental results.
Plans for future computational work include a dynamic model that will incorporate the
results of these physiological investigations, as well as a more realistic version
of the Mauthner system. Acknowledgements We are grateful for the technical assistance
of members of the Boulder Connectionist Research Group, especially Don Mathis
for help in debugging and optimizing the original code. We thank P.L. Edds-Walton
for crucial discussions. This work was supported by a grant to RCE from the National
Institutes of Health (RO1 NS22621). References Blaxter, J.H.S., J.A.B. Gray, and
E.J. Denton (1981) Sound and startle responses in herring shoals. J. Mar. Biol.
Assoc. UK, 61: 851-869. Canfield, J.G. and R.C. Eaton (1990) Swimbladder acoustic
pressure transduction initiates Mauthner-mediated escape. Nature, 347: 760-762. Eaton,
R.C., J.G. Canfield and A.L. Guzik (1994) Left-right discrimination of sound onset
by the Mauthner system. Brain Behav. Evol., in press. Eaton, R.C., R. DiDomenico
and J. Nissanov (1991) Role of the Mauthner cell in sensorimotor integration by
the brain stem escape network. Brain Behav. Evol. Faber, D.S. and H. Korn (1978)
Electrophysiology of the Mauthner cell: Basic properties, synaptic mechanisms
and associated networks. In Neurobiology of the Mauthner Cell, D.S. Faber and
H. Korn (eds), Raven Press, NY, pp. 47-131. Fay, R.R. (1984) The goldfish ear codes
the axis of acoustic particle motion in three dimensions. Furukawa, T. and Y. Ishii (1967)
Effects of static bending of sensory hairs on sound reception in the goldfish.
Japanese J. Physiol., 17: 572-588. Guzik, A.L. and R.C. Eaton (1993) The XNOR model
for directional hearing by the Mauthner system. Soc. Neurosci. Abstr. Platt, C.
(1977) Hair cell distribution and orientation in goldfish otolith organs. J. Schuijf,
A. (1981) Models of acoustic localization. In Hearing and Sound Communication
in Fishes, W.N. Tavolga, A.N. Popper and R.R. Fay (eds.), Springer, New'
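The XNOR logic attributed to each Mauthner cell in the passage above fits in a few lines. This is a hedged sketch of the truth table only — the function name and boolean encoding are illustrative assumptions, and the actual model implements this with populations of PHP and auditory fiber units rather than a single gate.

```python
def mauthner_outputs(pressure_rising: bool, motion_from_left: bool):
    """Hypothetical XNOR truth table: each Mauthner cell should fire iff
    pressure polarity and particle-motion direction jointly indicate a
    sound source on its own side (exclusive-nor of its two inputs)."""
    left_fires = not (pressure_rising ^ motion_from_left)          # XNOR, left-positive
    right_fires = not (pressure_rising ^ (not motion_from_left))   # XNOR, right-positive
    return left_fires, right_fires

# Enumerate the four supra-threshold stimulus conditions.
for pressure_rising in (True, False):
    for motion_from_left in (True, False):
        left, right = mauthner_outputs(pressure_rising, motion_from_left)
        print(pressure_rising, motion_from_left, "->", "L" if left else "R")
```

Enumerating the four input combinations reproduces the truth table of Figure 1: exactly one Mauthner unit fires per stimulus, on the side the sound came from.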
- source_sentence: Effect of input stimulus coding on self-supervised learning performance
sentences:
- 'INTRODUCTION Formal language learning (Gold, 1969) has been a topic of concern
for cognitive science and artificial intelligence. It is the task of inducing
a computational description of a formal language from a sequence of positive and
negative examples of strings in the target language. Neural information processing
approaches to this problem involve the use of recurrent networks that embody
the internal state mechanisms underlying automata models (Cleeremans et al., 1989;
Elman, 1990; Pollack, 1991; Giles et al., 1992; Watrous & Kuhn, 1992). Unlike traditional
automata-based approaches, learning systems relying on recurrent networks have
an additional burden: we are still unsure as to what these networks are doing. Some
researchers have assumed that the networks are learning to simulate finite state
machines (FSMs) in their state dynamics and have begun to extract FSMs from the
networks'' state transition dynamics (Cleeremans et al., 1989; Giles et al.,
1992; Watrous & Kuhn, 1992). These extraction methods employ various clustering
techniques to partition the internal state space of the recurrent network into
a finite number of regions corresponding to the states of a finite state automaton.
This assumption of finite state behavior is dangerous on two accounts. First,
these extraction techniques are based on a discretization of the state space
which ignores the basic definition of information processing state. Second, discretization
can give rise to incomplete computational explanations of systems operating over
a continuous state space. SENSITIVITY TO INITIAL CONDITIONS In this section, I
will demonstrate how sensitivity to initial conditions can confuse an FSM extraction
system. The basis of this claim rests upon the definition of information processing
state. Information processing (lP) state is the foundation underlying automata
theory. Two IP states are the same if and only if they generate the same output
responses for all possible future inputs (Hopcroft Ullman, 1979). This definition
is the fulcrum for many proofs and techniques, including finite state machine
minimization. Any FSM extraction technique should embrace this definition, in
fact it grounds the standard FSM minimization methods and the physical system
modelling of Crutchfield and Young (Crutchfield & Young, 1989). Some dynamical
systems exhibit exponential divergence for nearby state vectors, yet remain confined
within an attractor. This is known as sensitivity to initial conditions. If this
divergent behavior is quantized, it appears as nondeterministic symbol sequences
(Crutchfield & Young, 1989) even though the underlying dynamical system is completely
deterministic (Figure 1). Consider a recurrent network with one output and three
recurrent state units. The output unit performs a threshold at zero activation
for state unit one. That is, when the activation of the first state unit of the
current state is less than zero then the output is A. Otherwise, the output is
B. Equation 1 presents a mathematical description: S(t) is the current state of
the system and O(t) is the current output. Figure 2 illustrates what happens when
you run this network for many iterations. The point in the upper left hand state
space is actually a thousand individual points all within a ball of radius 0.01.
In one iteration these points migrate down to the lower corner of the state space.
Notice that the ball has elongated along one dimension. After ten iterations the
original ball shape is no longer visible. After seventeen, the points are beginning
to spread along a two dimensional sheet within state space. And by fifty iterations,
we see the network reaching its full extent in state space. This behavior
is known as sensitivity to initial conditions and is one of three conditions which
have been used to characterize chaotic dynamical systems (Devaney, 1989). In
short, sensitivity to initial conditions implies that
any epsilon ball on the attractor of the dynamical system will exponentially diverge,
yet still be contained within the locus of the attractor. (Figure 1: Examples of
deterministic dynamical systems whose discretized trajectories appear
nondeterministic.) The rate of this divergence
is illustrated in Figure 3 where the maximum distance between two points is plotted
with respect to the number of iterations. Note the exponential growth before saturation.
Saturation occurs as the point cloud envelops the attractor. No matter how small
one partitions the state space, sensitivity to initial conditions will eventually
force the extracted state to split into multiple trajectories independent of the
future input sequence. This is characteristic of a nondeterministic state transition.
Unfortunately, it is very difficult, and probably intractable, to differentiate between a nondeterministic system with a small number of states and a deterministic system with a large number of states. In certain cases, however, it is possible to analytically ascertain this distinction (Crutchfield & Young, 1989). THE OBSERVERS'' PARADOX One response to
this problem is to evoke more computationally complex models such as push-down
or linear-bounded automata. Unfortunately, the act of quantization can actually
introduce both complexion and complexity in the resulting symbol sequence. Pollack
and I have focused on a well-hidden problem with the symbol system approach to understanding the computational powers of physical systems. [Figure 2: The state space of a recurrent network whose next-state transitions are sensitive to initial conditions. The initial epsilon ball contains 1000 points; these points first straddle the output decision boundary at iteration seven.] This work (Kolen & Pollack, 1993; Kolen & Pollack, in press) demonstrated that computational complexity, in terms of Chomsky''s
hierarchy of formal languages (Chomsky, 1957; Chomsky, 1965) and Newell and Simon''s physical symbol systems (Newell & Simon, 1976), is not intrinsic to physical systems. The demonstration below shows how apparently trivial changes in the partitioning
of state space can produce symbol sequences from varying complexity classes. Consider
a point moving in a circular orbit with a fixed rotational velocity, such as the
end of a rotating rod spinning around a fixed center, or imagine watching a white
dot on a spinning bicycle wheel. We measure the location of the dot by periodically sampling the location with a single decision boundary (Figure 4, left side). If the point is to the left of the boundary at the time of the sample, we write down an "l". Likewise, we write down an "r" when the point is on the other side. (The
probability of the point landing on the boundary is zero and can arbitrarily be
assigned to either category without affecting the results below.) In the limit,
we will have recorded an infinite sequence of symbols containing long sequences
of r''s and l''s. The specific ordering of symbols observed in a long sequence
of multiple rotations is dependent upon the initial rotational angle of the system. [Figure 3: Spread of initial points across the attractor as measured by maximum distance. Figure 4: On the left, two decision regions which induce a context free language; θ is the current angle of rotation; at the time of sampling, if the point is to the left (right) of the dividing line, an l (r) is generated. On the right, three decision regions which induce a context sensitive language.] However,
the sequence does possess a number of recurring structural regularities, which we call sentences: a run of r''s followed by a run of l''s. For a fixed rotational velocity (rotations per time unit) and sampling rate, the observed system will generate sentences of the form r^n l^m (n, m > 0). (The notation r^n indicates a sequence of n r''s.) For a fixed sampling rate, each rotational velocity specifies up to three sentences whose number of r''s and l''s differ by at most one. These sentences repeat in an arbitrary manner. Thus, a typical subsequence of a rotator which produces sentences r^n l^n, r^n l^(n+1), r^(n+1) l^n would look like an arbitrary interleaving of these three sentence types. A language of sentences may be constructed by examining the families of
sentences gener ated by a large collection of individuals, much like a natural
language is induced from the abilities of its individual speakers. In this context,
a language could be induced from a pop ulation of rotators with different rotational
velocities where individuals generate sentences of the form {r"l n, r"l "1 ,r"ll"},
n O. The reSUlting language can be described by a context free grammar and has
unbounded dependencies; the number of 1 ''s is a function of the number of preceding
r''s. These two constraints on the language imply that the induced language is
context free. To show that this complexity class assignment is an artifact of
the observational mecha nism, consider the mechanism which reports three disjoint
regions: 1, c, and r (Figure 4, right side). Now the same rotating point will
generate sequences ofthe form For a fixed sampling rate, each rotational velocity
specifies up to seven sentences, rncffil k, when n, m, and k can differ no by
no more than one. Again, a language of sentences may be constructed containing
all sentences in which the number ofr''s, c''s, and l''s differs by no more than
one. The resulting language is context sensitive since it can be described by
a context sensitive grammar and cannot be context free as it is the finite union
of several context sensitive languages related to r"c"l n. CONCLUSION Using recurrent
neural networks as the representation underlying the language learning task has
revealed some inherent problems with the concept of this task. While formal languages
have mathematical validity, looking for language induction in physical systems
is questionable, especially if that system operates with continuous internal states. As I have shown, there are two major problems with the extraction of a learned automaton from our models. First, sensitivity to initial conditions produces
nondeterministic machines whose trajectories are specified by both the initial state of the network and the dynamics of the state transformation. The dynamics
provide the shape of the eventual attractor. The initial conditions specify the
allowable trajectories toward that attractor. While clustering methods work in
the analysis of feed-forward networks because of neighborhood preservation (as
each layer is a homeomorphism), they may fail when applied to recurrent network
state space transformations. FSM construction methods which look for single transitions
between regions will not help in this case because the network eventually separates
initially nearby states across several FSM state regions. The second problem with
the extraction of a learned automaton from a recurrent network is that trivial changes in observation strategies can cause one to induce behavioral descriptions from a wide range of computational complexity classes for a single system. It is the researcher''s bias which determines that a dynamical system is equivalent to a finite state automaton. One response to the first problem described above has been
to remove the sources of nondeterminism from the mechanisms. Zeng et al. (1993) corrected the second order recurrent network model by replacing the continuous internal state transformation with a discrete step function. (The continuous activation remained for training purposes.) This move was justified by their focus on regular language learning, as these languages can be recognized
by finite state machines. This work is questionable on two points, however. First,
tractable algorithms already exist for solving this problem (e.g. Angluin, 1987).
Second, they claim that the network is self-clustering the internal states. Self-clustering
occurs only at the corners of the state space hypercube because of the discrete
activation function, in the same manner as a digital sequential circuit "clusters"
its states. Das and Mozer (1994), on the other hand, have relocated the clustering
algorithm. Their work focused on recurrent networks that perform internal clustering
during training. These networks operate much like competitive learning in feed-forward
networks (e.g. Rumelhart and Zipser, 1986) as the dynamics of the learning rules
constrain the state representations such that stable clusters emerge. The shortcomings
of finite state machine extraction must be understood with respect to the task
at hand. The actual dynamics of the network may be inconsequential to the final product if one is using the recurrent network as a pathway for designing a finite
state machine. In this engineering situation, the network is thrown away once
the FSM is extracted. Neural network training can be viewed as an "interior" method for finding discrete solutions. It is interior in the same sense as linear programming
algorithms can be classified as either edge or interior methods. The former follows
the edges of the simplex, much like traditional FSM learning algorithms search
the space of FSMs. Internal methods, on the other hand, explore search spaces
which can embed the target spaces. Linear programming algorithms employing internal
methods move through the interior of the defined simplex. Likewise, recurrent
neural network learning methods swim through mechanisms with multiple finite state interpretations. Some researchers, specifically those discussed above, have begun to bias recurrent network learning to walk the edges (Zeng et al., 1993) or to internally cluster states (Das & Mozer, 1994). In order to understand the behavior
of recurrent networks, these devices should be regarded as dynamical systems (Kolen,
1994). In particular, most common recurrent networks are actually iterated mappings,
nonlinear versions of Barnsley''s iterated function systems (Barnsley, 1988).
While automata also fall into this class, they are a specialization of dynamical
systems, namely discrete time and state systems. Unfortunately, information processing
abstractions are only applicable within this domain and do not make any sense
in the broader domains of continuous time or continuous space dynamical systems.
Acknowledgments The research reported in this paper has been supported by Office of Naval Research grant number N00014-92-J-1195. I thank all those who have made comments and suggestions for improvement of this paper, especially Greg Saunders and Lee Giles. References Angluin, D. (1987). Learning Regular Sets from Queries and Counterexamples. Information and Computation. Barnsley, M. (1988). Fractals Everywhere. Academic Press: San Diego, CA. Chomsky, N. (1957). Syntactic Structures. The Hague: Mouton & Co. Chomsky, N. (1965). Aspects of the Theory of Syntax. Cambridge, Mass.: MIT Press. Cleeremans, A., Servan-Schreiber, D., & McClelland, J. L. (1989). Finite state automata and simple recurrent networks. Neural Computation, 1, 372-381. Crutchfield, J., & Young, K. (1989). Computation at the Onset of Chaos. In W. Zurek (Ed.), Entropy, Complexity, and the Physics of Information. Reading: Addison-Wesley. Das, R., & Mozer, M. (1994). A Hybrid Gradient-Descent/Clustering Technique for Finite State Machine Induction. In Jack D. Cowan, Gerald Tesauro, & Joshua Alspector (Eds.), Advances in Neural Information Processing Systems 6. Morgan Kaufmann: San Francisco. Devaney, R. L. (1989). An Introduction to Chaotic Dynamical Systems. Addison-Wesley. Elman, J. (1990). Finding structure in time. Cognitive Science, 14, 179-211. Giles, C. L., et al. (1992). Extracting and Learning an Unknown Grammar with Recurrent Neural Networks. In John E. Moody, Steven J. Hanson, & Richard P. Lippman (Eds.), Advances in Neural Information Processing Systems 4. Morgan Kaufmann. Gold, E. M. (1969). Language identification in the limit. Information and Control, 10, 372-. Hopcroft, J. E., & Ullman, J. D. (1979). Introduction to Automata Theory, Languages, and Computation. Addison-Wesley. Kolen, J. F. (1994). Recurrent Networks: State Machines or Iterated Function Systems? In M. C. Mozer, P. Smolensky, D. S. Touretzky, J. L. Elman, & A. S. Weigend (Eds.), Proceedings of the 1993 Connectionist Models Summer School (pp. 203-210). Hillsdale, NJ: Erlbaum Associates. Kolen, J. F., & Pollack, J. B. (1993). The Apparent Computational Complexity of Physical Systems. In Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society. Lawrence Erlbaum. Kolen, J. F., & Pollack, J. B. (In press). The Observers'' Paradox: The Apparent Computational Complexity of Physical Systems. Journal of Experimental and Theoretical Artificial Intelligence. Pollack, J. B. (1991). The Induction of Dynamical Recognizers. Machine Learning, 7, 227-. Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: symbols and search. Communications of the Association for Computing Machinery, 19, 113-126. Rumelhart, D. E., & Zipser, D. (1986). Feature Discovery by Competitive Learning. In D. E. Rumelhart, J. L. McClelland, & the PDP Research Group (Eds.), Parallel Distributed Processing, Volume 1, 151-193. MIT Press: Cambridge, MA. Watrous, R. L., & Kuhn, G. M. (1992). Induction of Finite-State Automata Using Second-Order Recurrent Networks. In John E. Moody, Steven J. Hanson, & Richard P. Lippman (Eds.), Advances in Neural Information Processing Systems 4. Morgan Kaufmann. Zeng, Z., Goodman, R. M., & Smyth, P. (1993). Learning Finite State Machines With Self-Clustering Recurrent Networks. Neural Computation, 5, 976-990.'
- 'INTRODUCTION Temporal difference (TD) planning [6, 7] uses prediction for control.
Consider an agent moving around a finite grid such as the one in figure 1 (the
agent is incapable of crossing the barrier) trying to reach a goal whose position
it does not know. If it can predict how far away from the goal it is at the current
step, and how far away from the goal it is at the next step, after making a move,
then it can decide whether or not that move was helpful or harmful. If, in addition,
it can record this fact, then it can learn how to navigate to the goal. This generation
of actions from predictions is closely related to the mechanism of dynamical programming.
TD is used to learn the predictions in the first place. Consider the agent moving
around randomly on the grid, receiving a negative reinforcement of -1 for every
move it makes apart from moves which take it onto the goal. In this case, if it
can estimate from every location it visits, how much reinforcement (discounted
by how soon it arrives) it will get before it next reaches the goal, it will be
predicting how far away it is, based on the random method of selecting actions.
TD''s mechanism of learning is to force the predictions to be consistent; the
prediction from location a should be -1 more than the average of the predictions
from the locations that can be reached in one step (hence the extra -1 reinforcement)
from a. If the agent initially
selects each action with the same probability, then the estimate of future reinforcement
from a will be monotonically related to how many steps a is away from the goal.
This makes the predictions useful for criticising actions as above. In practice,
the agent will modify its actions according to this criticism at the same time
as learning the predictions based on those actions. Barto, Sutton and Watkins
[2] develop this example, and show how the TD mechanism coupled with a punctate representation of the stimulus (referred to as R_BSW below) finds the optimal paths to the goal. R_BSW ignores the cues shown in figure 1, and devotes one
input unit to each location on the grid, which fires if and only if the agent
is at that place. TD methods can however work with more general codes. Section
2 considers al ternative representations, including ones that are sensitive to
the orientation of the agent as it moves through the grid, and section 3 looks
at a restricted form of latent learning - what the agent can divine about its
environment in the absence of reinforcement. Both techniques can improve the speed
of learning. 2 ALTERNATE REPRESENTATIONS Stimulus representations, the means by
which the agent finds out from the environ ment where it is, can be classified
along two dimensions; whether they are punctate or distributed, and whether they
are directionally sensitive or in register with the world. Over most of the grid,
a ''sensible'' distributed representation, such as a coarse-coded one, would be
expected to make learning faster, as information about the value and action functions
could be shared across adjacent grid points. There are points of discontinuity
in the actions, as in the region above the right hand arm of the barrier, but
they are few. In his PhD thesis [9], Watkins considers a rather similar problem
to that in figure 1, and solves it using his variant of TD, Q-learning, based on
a CMAC [1] coarse-coded representation of the space. Since his agent moves in
a continuous bounded space, rather than being confined merely to discrete grid
points, something of this sort is anyway essential. After the initial learning,
Watkins arbitrarily makes the agent move ten times more slowly in a closed section
of the space. This has a similar effect to the barrier in inducing a discontinuity
in the action space. Despite the CMACs forcing the system to share information
across such discontinuities, they were able to learn the task quickly. The other
dimension over which representations may vary involves the extent to which they
are sensitive to the direction in which the agent is facing. This is of interest
if the agent must construe its location from the cues around the grid. In this
case, rather than moving North, South, East or West, which are actions registered
with the world, the agent should only move Ahead, Left or Right (Behind is disabled
as an additional constraint), whose effects are also orientation dependent. This,
together with the fact that the representation will be less compact (it having
a larger input dimensionality) should make learning slower. Dynamical programming
and its equivalents are notoriously subject to Bellman''s curse of dimensionality,
an engineering equivalent of exponential explosion in search. Table 1 shows four
possible representations classified along these two dimensions. [Table 1: Representations, classified by coarseness (punctate vs. distributed) and directional sensitivity. Directionally sensitive: R_4X (punctate), R_A (distributed); directionally insensitive: R_BSW (punctate), R_CMAC (distributed).] R_BSW is the representation Barto, Sutton and Watkins used. R_4X is punctate and directionally sensitive - it devotes four units to every grid point, one of which fires for each possible orientation of the agent. R_CMAC, the equivalent of Watkins'' representation, was not simulated, because its capabilities would not differ markedly from those of the mapping-based representation developed in the next section. R_A is rather different from the other representations; it
provides a test of a representation which is more directly associated with the sensory information that might be available directly from the cues. Figure 2 shows how R_A works. Various identifiable cues, C_1 ... C_c (c = 7 in the figure), are scattered around the outside of the grid, and the agent has a fictitious ''retina'' which rotates with it. This retina is divided into a number of angular buckets (8 in the figure), and each bucket has c units, the i-th one of which responds if the cue C_i is visible in that bucket. This representation is clearly directionally
sensitive (if the agent is facing a different way, then so is its retina, and
so no cue will be visible in the same bucket as it was before), and also distributed,
since in general more than one cue will be visible from every location. Note that
there is no restriction on the number of units that can fire in each bucket at
any time - more than one will fire if more than one cue is visible there. Also,
under the present system R_A will in general not work if its coding is ambiguous - grid points must be distinguishable. Finally, it should be clear that R_A is
not biologically plausible. Figure 3 shows the learning curves for the three representations
simulated. Each point is generated by switching off the learning temporarily after
a certain number of iterations, starting the agent from everywhere in the grid,
and averaging how many steps it takes in getting to the goal over and above the
minimum necessary. It is apparent that R_4X is substantially worse, but, surprisingly, that R_A is actually better than R_BSW. This implies that the added advantage of its distributed nature more than outweighs its disadvantages of having more
components and being directionally sensitive. One of the motivations behind studying
alternate representations is the experimental findings on place cells in the
hippocampi of rats (amongst other species). These are cells that fire only when
the rat is at a certain location in its environment. Although their existence
has led to many hypotheses about rat cognitive mapping (see [5] for a substantial discussion of place cells and mapping), it is important to note that even with a map, there remains the computationally intensive problem of navigation addressed, in this paper, by TD. R_A, being closely related to the input stimuli, is quite unlike a place cell code - the other representations all bear some similarities. 3 GOAL-FREE LEARNING One of the problems
with the TD system as described is that it is incapable of latent learning in the
absence of reinforcement or a goal. If the goal is just taken away, but the -1
reinforcements are still applied at each step, then the values assigned to each
location will tend to -∞. If both are removed, then although the agent will wander
about its environment with random gay abandon, it will not pick up anything that
could be used to speed subsequent learning. Latent learning experiments with rats
in dry mazes prove fairly conclusively that rats running mazes in the absence
of rewards and punishments learn almost as much as rats that are reinforced. One
way to solve this problem is suggested by Sutton''s DYNA architecture [7]. Briefly,
this constructs a map of place x action - next place, and takes steps in the fictitious
world constructed from its map in-between taking steps in the real world, as a
way of ironing out the computational ''bumps'' (ie inconsistencies) in the value
and action functions. Instead, it is possible to avoid constructing a complete
map by altering the representation of the environment used for learning the prediction
function and optimal actions. The section on representations concluded that coarse-coded
representations are generally better than punctate ones, since information can
be shared between neighbouring points. However, not all neighbouring points are
amenable to this sharing, because of discontinuities in the value and action functions.
If there were a way of generating a coarse coded representation (generally from
a punctate one) that is sensitive to the structure of the task, rather than arbitrarily
assigned by the environment, it should provide the base for faster learning still.
In this case, neighbouring points should only be coded together if they are not
separated by the barrier. The initial exploration would allow the agent to learn
this much about the structure of the environment. Consider a set of units whose
job is to predict the future discounted sum of firings of the raw input lines.
Using R_BSW during the initial stage of learning when the actions are still
random, if the agent is at location (3,3) of the grid, say, then the discounted
prediction of how often it will be in (3,4) (ie the frequency with which the single
unit representing (3,4) will fire) will be high, since this location is close.
However, the prediction for (7,11) will be low, because it is very unlikely to
get there quickly. Consider the effect of the barrier: locations on opposite sides
of it, eg (1,6) and (2,6), though close in the Euclidean (or Manhattan) metric
on the grid, are far apart in the task. This means that the discounted prediction
of how often the agent will be at (1,6) given that it starts at (2,6), will be
proportionately lower. Overall, the prediction units should act like a coarse
code, sensitive to the structure of the task. As required, this information about
the environment is entirely independent of whether or not the agent is reinforced
during its exploration. In fact, the resulting ''map'' will be more accurate if
it is not, as its exploration will be more random. The output of the prediction
units is taken as an additional source of information for the value and action
functions. Since their main aim is to create intelligently distributed representations
from punctate ones, it is only appropriate to use these prediction units for R_BSW and R_4X. Figure 4 compares average learning curves for R_BSW with and without these extra mapping units, and with and without 6000 steps
of latent learning (LL) in the absence of any reinforcement. A significant improvement
is apparent. Figure 5 shows one set of predictions based on the R_BSW representation (see footnote 1)
after a few un-reinforced iterations. The predictions are clearly fairly well
developed and smooth - a predictable exponentially decaying hump. The only deviations
from this are at the barrier and along the edges, where the effects of impermeability
and immobility are apparent. Figure 6 shows the same set of predictions but after
2000 reinforced iterations, by which time the agent reaches the goal almost optimally.
The predictions degenerate from being roughly radially symmetric (bar the barrier)
to being highly asymmetric. Once the agent has learnt how to get to the goal from
some location, the path it will follow, and so the locations it will visit from
there, is largely fixed. The asymptotic values of the predictions will therefore
be 0 for units not on the path, and γ^r for those on the path, where r is the number of steps since the agent''s start point and γ is the discounting factor weighting immediate versus distant reinforcement. This is a severe limitation since it implies that the topological information present in the early stages of learning evaporates, and with it almost all the benefits of the prediction units. 4 DISCUSSION
Navigation comprises two problems: where the agent and the goals in its environment are, and how it can get to them. Having some form of cognitive map, as is
suggested by the existence of place cells, addresses the first, but leaves open
the second. For the case of one goal, the simple TD method described here is one
solution. TD planning methods are clearly robust to changes in the way the input
stimulus is represented. Distributed codes, particularly ones that allow for the barrier, make learning faster. This is even true for R_A, which is sensitive to the orientation of the agent. All these results require each location to have a unique representation - Mozer and Bachrach [4] and Chrisley [3] and references
therein look at how ambiguities can be resolved using information on the sequence
of states the agent traverses. Since these TD planning methods are totally general,
just like dynamical programming, they are unlikely to scale well. Some evidence for this comes from the relatively poor performance of R_4X, with its quadrupled input dimension. This puts the onus back either onto dividing the task into manageable chunks, or onto more sophisticated representation. Acknowledgements I am very
grateful to Jay Buckingham, Kate Jeffrey, Richard Morris, Toby Tyrell, David Willshaw,
and the attendees of the PDP Workshop at Edinburgh, the Connectionist Group at
Amherst, and a spatial learning workshop at King''s College Cambridge for their
helpful comments. This work was funded by SERC. 1 Note that these are normalised
to a maximum value of 10, for graphical convenience. References [1] Albus, JS (1975). A new approach to manipulator control: the Cerebellar Model Articulation Controller (CMAC). Transactions of the ASME: Journal of Dynamical Systems, Measurement and Control, 97, pp 220-227. [2] Barto, AG, Sutton, RS & Watkins, CJCH (1989). Learning and Sequential Decision Making. Technical Report 89-95, Computer and Information Science, University of Massachusetts, Amherst, MA. [3] Chrisley, RL (1990). Cognitive map construction and use: A parallel distributed approach. In DS Touretzky, J Elman, TJ Sejnowski & GE Hinton, editors, Proceedings of the 1990 Connectionist Models Summer School. San Mateo, CA: Morgan Kaufmann. [4] Mozer, MC & Bachrach, J (1990). Discovering the structure of a reactive environment by exploration. In D Touretzky, editor, Advances in Neural Information Processing Systems, pp 439-446. San Mateo, CA: Morgan Kaufmann. [5] O''Keefe, J & Nadel, L (1978). The Hippocampus as a Cognitive Map. Oxford, England: Oxford University Press. [6] Sutton, RS (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3, pp 9-44. [7] Sutton, RS (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning. San Mateo, CA: Morgan Kaufmann. [8] Sutton, RS & Barto, AG. To appear. Time-derivative models of Pavlovian conditioning. In M Gabriel & JW Moore, editors, Learning and Computational Neuroscience. Cambridge, MA: MIT Press. [9] Watkins, CJCH (1989). Learning from Delayed Rewards. PhD Thesis. University of Cambridge, England. [Fig 2: The ''retina'' for R_A, showing the angular buckets and cue-detecting units. Fig 3: Learning curves (average extra steps to goal vs. learning iterations) for the different representations. Fig 4: Mapping with R_BSW. Fig 5: Initial predictions from (5,6). Fig 6: Predictions after 2000 iterations.]'
- 'Introduction Hand-written digit recognition has become one of the touchstone
problems in neural networks recently. Large databases of training examples such
as the NIST (National Institute of Standards and Technology) Special Database
3 have become available, and real-world applications with clear practical value,
such as recognizing zip codes in letters, have emerged. Diverse architectures
with varying learning rules have been proposed, including feed-forward networks
(Denker et al. 1989; Ie Cun et al. 1990; Martin and Pittman 1990), self-organizing
maps (Allinson et al. 1994), and dedicated approaches such as the neocognitron
(Fukushima and Wake 1990). The problem is difficult because handwriting varies
a lot, some digits are easily confusable, and recognition must be based on small
but crucial differences. For ex ample, the digits 3 and 8, 4 and 9, and 1 and
7 have several overlapping segments, and the differences are often lost in the
noise. Thus, hand-written digit recognition can be seen as a process of identifying the distinct features and producing an internal representation where the significant differences are magnified, making the recognition easier. In this paper, the Laterally
Interconnected Synergetically Self-Organizing Map architecture (LISSOM; Sirosh and Miikkulainen 1994, 1995, 1996) was employed to form such a separable representation. The lateral inhibitory connections of the LISSOM map decorrelate features in the input, retaining only those differences that are the most significant. Using LISSOM as a front end, the actual recognition can be performed by any standard neural network architecture, such as the perceptron. The experiments showed that while direct recognition of the digit bitmaps with a simple perceptron network is successful 72.3% of the time, and recognizing them using a standard self-organizing map (SOM) as the front end 84.1% of the time, the recognition rate is 88.1% based
front end for real-world handwritten character recognition systems. 2 The Recognition
System 2.1 Overall architecture The system consists of two networks: a 20 x 20
LISSOM map performs the feature analysis and decorrelation of the input, and a
single layer of 10 perceptrons the final recognition (Figure 1 (a)). The input
digit is represented as a bitmap on the 32 x 32 input layer. Each LISSOM unit
is fully connected to the input layer through the afferent connections, and to
the other units in the map through lateral excitatory and inhibitory connections
(Figure 1 (b)). The excitatory connections are short range, connecting only to
the closest neighbors of the unit, but the inhibitory connections cover the whole
map. The perceptron layer consists of 10 units, corresponding to digits 0 to 9. The perceptrons are fully connected to the LISSOM map, receiving the full
activation pattern on the map as their input. The perceptron weights are learned
through the delta rule, and the LISSOM afferent and lateral weights through Hebbian
learning. 2.2 LISSOM Activity Generation and Weight Adaptation The afferent and
lateral weights in LISSOM are learned through Hebbian adaptation. A bitmap image is presented to the input layer, and the initial activity of the map is calculated as the weighted sum of the input. For unit (i, j), the initial response η_ij is η_ij = σ(Σ_ab μ_ij,ab ξ_ab), where ξ_ab is the activation of input unit (a, b), μ_ij,ab is the afferent weight connecting input unit (a, b) to map unit (i, j), and σ is a piecewise linear approximation of the sigmoid activation function. The activity is then settled through the lateral connections. Each new activity η_ij(t) at step t depends on the afferent activation and the lateral excitation and inhibition: η_ij(t) = σ(Σ_ab μ_ij,ab ξ_ab + γ_e Σ_kl E_ij,kl η_kl(t-1) - γ_i Σ_kl I_ij,kl η_kl(t-1)), where E_ij,kl and I_ij,kl are the excitatory and inhibitory connection weights from map unit (k, l) to (i, j) and η_kl(t-1) is the activation of unit (k, l) during the previous time step. The constants γ_e and γ_i control the relative strength of the lateral
excitation and inhibition. After the activity has settled, the afferent and lateral
weights are modified according to the Hebb rule. [Figure 1: The system architecture. (a) The input layer is activated according to the bitmap image of digit 6. The activation propagates through the afferent connections to the LISSOM map, and settles through its lateral connections into a stable pattern. This pattern is the internal representation of the input that is then recognized by the perceptron layer. Through the connections from LISSOM to the perceptrons, the unit representing 6 is strongly activated, with weak activations on other units such as 3 and 8. (b) The lateral connections to unit (i, j), indicated by the dark square, are shown. The neighborhood of excitatory connections (lightly shaded) is elevated from the map for a clearer view. The units in the excitatory region also have inhibitory lateral connections (indicated by medium shading) to the center unit. The excitatory radius is 1 and the inhibitory connections cover the whole map.] Afferent weights are normalized so that the length of the weight vector remains the same; lateral weights are normalized to keep the sum of weights constant (Sirosh and Miikkulainen 1994): μ_ij,mn(t+1) = (μ_ij,mn(t) + α_inp η_ij ξ_mn) / sqrt(Σ_mn [μ_ij,mn(t) + α_inp η_ij ξ_mn]^2) (3), with the analogous Hebbian rule and sum normalization for the lateral weights (equation 4), where μ_ij,mn is the afferent weight from input unit (m, n) to map unit (i, j), and α_inp is the input learning rate; w_ij,kl is the lateral weight (either excitatory E_ij,kl or inhibitory I_ij,kl) from map unit (k, l) to (i, j), and α is the lateral learning rate (either α_exc or α_inh). 2.3 Perceptron Output
Generation and Weight Adaptation The perceptrons at the output of the system receive
the activation pattern on the LISSOM map as their input. The perceptrons are trained
after the LISSOM map has been organized. The activation for the perceptron unit O_m is O_m = C Σ_ij η_ij v_ij,m, where C is a scaling constant, η_ij is the activity of LISSOM map unit (i, j), and v_ij,m is the connection weight between LISSOM map unit (i, j) and output layer unit m. The delta rule is used to train the perceptrons: the weight adaptation is proportional to the map activity and the difference between the output and the target: Δv_ij,m = α_out η_ij (ζ_m - O_m), where α_out is the learning rate of the perceptron weights, η_ij is the LISSOM map unit activity, and ζ_m is the target activation for unit m (ζ_m = 1 if the correct digit is m, 0 otherwise). [Table 1: Final Recognition Results, with columns Representation, Training, and Test. The average recognition percentage and its variance over the 10 different splits are shown for the training and test sets. The differences in each set are statistically significant with p > .9999.] 3 Experiments A subset of 2992 patterns
from the NIST Database 3 was used as training and testing data.1 The patterns
were normalized to make sure that each example had an equal effect on the LISSOM
map (Sirosh and Miikkulainen 1994). LISSOM was trained with 2000 patterns. Of
these, 1700 were used to train the perceptron layer, and the remaining 300 were
used as the validation set to determine when to stop training the perceptrons.
The final recognition performance of the whole system was measured on the remaining
992 patterns, which neither LISSOM nor the perceptrons had seen during training.
The experiment was repeated 10 times with different random splits of the 2992
input patterns into training, validation, and testing sets. The LISSOM map can
be organized starting from initially random weights. However, if the input dimensionality is large, as it is in the case of the 32 x 32 bitmaps, each unit on the map is activated
roughly to the same degree, and it is difficult to bootstrap the self-organizing
process (Sirosh and Miikkulainen 1994, 1996). The standard Self-Organizing Map
algorithm can be used to preorganize the map in this case. The SOM performs preliminary
feature analysis of the input, and forms a coarse topological map of the input
space. This map can then be used as the starting point for the LISSOM algorithm,
which modifies the topological organization and learns lateral connections that decorrelate and represent a clearer categorization of the input patterns. The initial self-organizing map was formed in 8 epochs over the training set, gradually reducing the neighborhood radius
then added to the system, and over another 30 epochs, the afferent and lateral
weights of the map were adapted according to equations 3 and 4. In the beginning,
the excitation radius was set to 8 and the inhibition radius to 20. The excitation
radius was gradually decreased to 1, making the activity patterns more concentrated and causing the units to become more selective to particular types of input patterns. For comparison, the initial self-organized map was also trained for another
30 epochs, gradually decreasing the neighborhood size to 1 as well. The final
afferent weights for the SOM and LISSOM maps are shown in figures 2 and 3. After
the SOM and LISSOM maps were organized, a complete set of activation patterns
on the two maps were collected. These patterns then formed the training input
for the perceptron layer. Two separate versions were each trained for 500 epochs,
one with SOM and the other with LISSOM patterns. A third perceptron layer was
trained directly with the input bitmaps as well. Recognition performance was measured
by counting how often the most highly active perceptron unit was the correct one. The results were averaged over the 10 different splits. On average, the final LISSOM-perceptron system correctly recognized 88.1% of the 992-pattern test sets. (Footnote 1: Downloadable at ftp://sequoyah.ncsl.nist.gov/pub/databases/.) [Figure 2: Final Afferent Weights of the SOM map. The digit-like patterns represent the afferent weights of each map unit projected on the input layer. For example, the lower left corner represents the afferent weights of unit (0,0). High weight values are shown in black and low in white. The pattern of weights shows the input pattern to which this unit is most sensitive (6 in this case). There are local clusters sensitive to each digit category.] This is significantly better than the 84.1% of the SOM-perceptron system, and the 72.3% achieved by the perceptron layer alone (Table 1). These results suggest that the internal
representations generated by the LISSOM map are more distinct and easier to recognize
than the raw input patterns and the representations generated by the SOM map.
4 Discussion The architecture was motivated by the hypothesis that the lateral
inhibitory connections of the LISSOM map would decorrelate and force the map
activity patterns to become more distinct. The recognition could then be performed
by even the simplest classification architectures, such as the perceptron. Indeed,
the LISSOM representations were easier to recognize than the SOM patterns, which
lends evidential support to the hypothesis. In additional experiments, the perceptron output layer was replaced by a two-weight-layer backpropagation network and
a Hebbian associator net, and trained with the same patterns as the perceptrons.
The recognition results were practically the same for the perceptron, backpropagation,
and Hebbian output networks, indicating that the internal representations formed
by the LISSOM map are the crucially important part of the recognition system.
A comparison of the learning curves reveals two interesting effects (figure 4).
First, even though the perceptron net trained with the raw input patterns initially performs well on the test set, its generalization decreases dramatically during training. This is because the net only learns to memorize the training examples, which does not help much with new noisy patterns. Good internal representations are therefore crucial for generalization. Second, even though initially the settling process of the LISSOM map forms patterns that are significantly easier to recognize than the initial, unsettled patterns (formed through the afferent connections only), this difference becomes insignificant later during training. The afferent connections are modified according to the final, settled patterns, and gradually learn to anticipate the decorrelated internal representations that the lateral connections form. [Figure 3: Final Afferent Weights of the LISSOM map. The squares identify the above-average inhibitory lateral connections to unit (10,4) (indicated by the thick square). Note that inhibition comes mostly from areas of similar functionality (i.e. areas sensitive to similar input), thereby decorrelating the map activity and forming a sparser representation of the input.] 5 Conclusion
The experiments reported in this paper show that LISSOM forms internal representations of the input patterns that are easier to categorize than the raw inputs
and the patterns on the SOM map, and suggest that LISSOM can form a useful front
end for character recognition systems, and perhaps for other pattern recognition
systems as well (such as speech). The main direction of future work is to apply
the approach to larger data sets, including the full NIST 3 database, to use a
more powerful recognition network instead of the perceptron, and to increase the
map size to obtain a richer representation of the input space. Acknowledgements
This research was supported in part by National Science Foundation under grant
IRI-9309273. Computer time for the simulations was provided by the Pittsburgh
Supercomputing Center under grants IRI930005P and IRI940004P, and by a High Performance
Computer Time Grant from the University of Texas at Austin. References Allinson,
N. M., Johnson, M. J., and Moon, K. J. (1994). Digital realisation of self-organising maps. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 6. San Mateo, CA: Morgan Kaufmann. Figure 4: Comparison of the learning curves. A perceptron
network was trained to recognize four different kinds of internal representations:
the settled LISSOM patterns, the LISSOM patterns before settling, the patterns
on the final SOM network, and raw input bitmaps. The recognition accuracy on the
test set was then measured and averaged over 10 simulations. The generalization
of the raw input perceptron system decreases rapidly as the net learns to memorize
the training patterns. The difference of using settled and unsettled LISSOM patterns
diminishes as the afferent weights of LISSOM learn to take into account the decorrelation
performed by the lateral weights. Denker, J. S., Gardner, W. R., Graf, H. P., Henderson, D., Howard, R. E., Hubbard, W., Jackel, L. D., Baird, H. S., and Guyon, I. (1989). Neural network recognizer for hand-written zip code digits. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 1. San Mateo, CA: Morgan Kaufmann. Fukushima, K., and Wake, N. (1990). Alphanumeric character recognition by neocognitron. In Advanced Neural Computers, 263-270. Elsevier Science Publishers B.V. (North-Holland). le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. (1990). Handwritten digit recognition with a back propagation network. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 2. San Mateo, CA: Morgan Kaufmann. Martin, G. L., and Pittman, J. A. (1990). Recognizing hand-printed letters and digits. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 2. San Mateo, CA: Morgan Kaufmann. Sirosh, J., and Miikkulainen, R. (1994). Cooperative self-organization of afferent and lateral connections in cortical maps. Biological Cybernetics, 71:66-78. Sirosh, J., and Miikkulainen, R. (1995). Ocular dominance and patterned lateral connections in a self-organizing model of the primary visual cortex. In Tesauro, G., Touretzky, D. S., and Leen, T. K., editors, Advances in Neural Information Processing Systems 7. Cambridge, MA: MIT Press. Sirosh, J., and Miikkulainen, R. (1996). Topographic receptive fields and patterned lateral interaction in a self-organizing model of the primary visual cortex. Neural Computation (in press).'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@10
- cosine_precision@10
- cosine_recall@10
- cosine_ndcg@5
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@10
model-index:
- name: SentenceTransformer based on NovaSearch/stella_en_400M_v5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@10
value: 0.9466
name: Cosine Accuracy@10
- type: cosine_precision@10
value: 0.09466
name: Cosine Precision@10
- type: cosine_recall@10
value: 0.9466
name: Cosine Recall@10
- type: cosine_ndcg@5
value: 0.8507439067474944
name: Cosine Ndcg@5
- type: cosine_ndcg@10
value: 0.8602810144357889
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8322816666666671
name: Cosine Mrr@10
- type: cosine_map@10
value: 0.8322816666666666
name: Cosine Map@10
---
# SentenceTransformer based on NovaSearch/stella_en_400M_v5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NovaSearch/stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [NovaSearch/stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5) <!-- at revision dcae70d3f2b4aaee36afc3cde638ca4614497aec -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 1024, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
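As a quick sanity check of the stack above, the following minimal sketch encodes two short texts and verifies the 1024-dimensional output. It assumes a recent `sentence-transformers` release (for `model.similarity`) and that, like the base model, this checkpoint needs `trust_remote_code=True` to load its custom `NewModel` code; `"sentence_transformers_model_id"` is the same placeholder used in the usage example below.

```python
from sentence_transformers import SentenceTransformer

# Placeholder id as elsewhere in this card; trust_remote_code is assumed to be
# required because the base model ships custom modeling code.
model = SentenceTransformer("sentence_transformers_model_id", trust_remote_code=True)

embeddings = model.encode([
    "a query about recurrent networks",
    "a passage about finite state machines",
])
print(embeddings.shape)  # expected: (2, 1024), matching the Dense layer above

# 2x2 cosine similarity matrix, using the model's declared similarity function.
print(model.similarity(embeddings, embeddings))
```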
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Effect of input stimulus coding on self-supervised learning performance',
"INTRODUCTION Temporal difference (TD) planning [6, 7] uses prediction for control. Consider an agent moving around a finite grid such as the one in figure 1 (the agent is incapable of crossing the barrier) trying to reach a goal whose position it does not know. If it can predict how far away from the goal it is at the current step, and how far away from the goal it is at the next step, after making a move, then it can decide whether or not that move was helpful or harmful. If, in addition, it can record this fact, then it can learn how to navigate to the goal. This generation of actions from predictions is closely related to the mechanism of dynamical programming. TD is used to learn the predictions in the first place. Consider the agent moving around randomly on the grid, receiving a negative reinforcement of -1 for every move it makes apart from moves which take it onto the goal. In this case, if it can estimat.e from every location it visits, how much reinforcement (discounted by how soon it arrives) it will get before it next reaches the goal, it will be predicting how far away it is, based on the random method of selecting actions. TD's mechanism of learning is to force the predictions to be consistent; the prediction from location a should be -1 more than the average of the predictions from the locations that can be reached in one step (hence the extra -1 reinforcement) from a. 464 Navigating Through Temporal Difference 465 If the agent initially selects each action with the same probability, then the estimate of future reinforcement from a will be monotonically related to how many steps a is away from the goal. This makes the predictions useful for criticising actions as above. In practice, the agent will modify its actions according to this criticism at the same time as learning the predictions based on those actions. Barto, Sutton and Watkins [2] develop this example, and show how the TD mech anism coupled with a punctate representation of the stimulus (referred to as'RBsw below) finds the optimal paths to the goal. 'RBsw ignores the cues shown in figure 1, and devotes one input unit to each location on the grid, which fires if and only if the agent is at that place. TD methods can however work with more general codes. Section 2 considers al ternative representations, including ones that are sensitive to the orientation of the agent as it moves through the grid, and section 3 looks at a restricted form of la. tent learning - what the agent can divine about its environment in the absence of reinforcement. Both techniques can improve the speed of learning. 2 ALTERNATE REPRESENTATIONS Stimulus representations, the means by which the agent finds out from the environ ment where it is, can be classified along two dimensions; whether they are punctate or distributed, and whether they are directionally sensitive or in register with the world. Over most of the grid, a 'sensible' distributed representation, such as a coarse-coded one, would be expected to make learning faster, as information about the value and action functions could be shared across adjacent grid points. There are points of discontinuity in the actions, as in the region above the right hand arm of the barrier, but they are few. In his PhD thesis [9], Watkins considers a rather similar problem to that in figure I, and solves it using his variant ofTD, Q-Iearning, based on a CMAC [1] coarse-coded representation of the space. 
Since his agent moves in a continuous bounded space, rather than being confined merely to discrete grid points, something of this sort is anyway essential. After the initial learning, Watkins arbitrarily makes the agent move ten times more slowly in a closed section of the space. This has a similar effect to the barrier in inducing a discontinuity in the action space. Despite the CMACS forcing the system to share information across such discontinuities, they were able to learn the task quickly. The other dimension over which representations may vary involves the extent to which they are sensitive to the direction in which the agent is facing. This is of interest if the agent must construe its location from the cues around the grid. In this case, rather than moving North, South, East or West, which are actions registered with the world, the agent should only move Ahead, Left or Right (Behind is disabled as an additional constraint), whose effects are also orientation dependent. This, together with the fact that the representation will be less compact (it having a larger input dimensionality) should make learning slower. Dynamical programming and its equivalents are notoriously subject to Bellman's curse of dimensionality, an engineering equivalent of exponential explosion in search. Table 1 shows four possible representations classified along these two dimensions. 466 Dayan Coarse ness Directionally Punctate Distributed Sensltlve R,x RA Insensltlve 'RBSW 'RCMAC Table 1: Representations. 'RBSW is the representation Barto, Sutton and Watkins used. R,x is punctate and directionally sensitive - it devotes four units to every grid point, one of which fires for each possible orientation of the agent. 'RcIAC' the equivalent of Watkins' representation, was not simulated, because its capabilities would not differ markedly from those of the mapping-based representation developed in the next section. nA is rather different from the other representations; it provides a test of a represen tation which is more directly associated with the sensory information that might be available directly from the cues. Figure 2 shows how 'RA works. Various identifiable cues, C 1 ... C c (c 7 in the figure) are scattered around the outside of the grid, and the agent has a fictitious 'retina' which rotates with it. This retina is divided into a number of angular buckets (8 in the figure), and each bucket has c units, the iSh one of which responds if the cue Ci is visible in that bucket. This representation is clearly directionally sensitive (if the agent is facing a different way, then so is its retina, and so no cue will be visible in the same bucket as it was before), and also distributed, since in general more than one cue will be visible from every location. Note that there is no restriction on the number of units that can fire in each bucket at any time - more than one will fire if more than one cue is visible there. Also, under the present system 'RA will in general not work if its coding is ambiguous - grid points must be distinguishable. Finally, it should be clear that 'RA is not biologically plausible. Figure 3 shows the learning curves for the three representations simulated. Each point is generated by switching off the learning temporarily after a certain number of iterations, starting the agent from everywhere in the grid, and averaging how many steps it takes in getting to the goal over and above the minimum necesary. 
It is apparent that n.x is substantially worse, but, surprisingly, that 'RA is actually better than 'RBSW . This implies that the added advantage of its distributed na ture more than outweighs its disadvantages of having more components and being directionally sensitive. One of the motivations behind studying alternate representations is the experimen tal findings on place cells in the hippocampi of rats (amongst other species). These are cells that fire only when the rat is at a certain location in its environment. Although their existence has led to many hypotheses about rat cognitive mapping (see [5J for a substantial discussion of place cells and mapping), it is important to note that even with a map, there remains the computational1y intensive problem of navigation addressed, in this paper, by TD. 'RA, being closely related to the input stimuli is quite unlike a place cell code - the other representations all bear some similarities. Navigating Through Temporal Difference 467 3 GOAL-FREE LEARNING One of the problems with the TD system as described is that it is incapable oflatent learning in the absence of reinforcement or a goal. If the goal is just taken away, but the -1 reinforcements are still applied at each step, then the values assigned to each location will tend to -00. If both are removed, then although the agent will wander about its environment with random gay abandon, it will not pick up anything that could be used to speed subsequent learning. Latent learning experiments with rats in dry mazes prove fairly conclusively that rats running mazes in the absence of rewards and punishments learn almost as much as rats that are reinforced. One way to solve this problem is suggested by Sutton's DYNA architecture [7]. Briefly, this constructs a map of place x action - next place, and takes steps in the fictitious world constructed from its map in-between taking steps in the real world, as a way of ironing out the computational 'bumps' (ie inconsistencies) in the value and action functions. Instead, it is possible to avoid constructing a complete map by altering the repre sentation of the environment used for learning the prediction function and optimal actions. The section on representations concluded that coarse-coded representations are generally better than punctate ones, since information can be shared between neighbouring points. However, not all neighbouring points are amenable to this sharing, because of discontinuities in the value and action functions. If there were a way of generating a coarse coded representation (generally from a punctate one) that is sensitive to the structure of the task, rather than arbitrarily assigned by the environment, it should provide the base for faster learning still. In this case, neighbouring points should only be coded together if they are not separated by the barrier. The initial exploration would allow the agent to learn this much about the structure of the environment. Consider a set of units whose job is to predict the future discounted sum of firings of the raw input lines. Using 'R.Bsw during the initial stage of learning when the act.ions are still random, if the agent is at location (3,3) of the grid, say, then the discounted prediction of how often it will be in (3,4) (ie the frequency with which the single unit representing (3,4) will fire) will be high, since this location is close. However, the prediction for (7,11) will be low, because it is very unlikely to get there quickly. 
Consider the effect of the barrier: locations on opposite sides of it, eg (1,6) and (2,6), though close in the Euclidean (or Manhattan) metric on the grid, are far apart in the task. This means that the discounted prediction of how often the agent will be at (1,6), given that it starts at (2,6), will be proportionately lower. Overall, the prediction units should act like a coarse code, sensitive to the structure of the task. As required, this information about the environment is entirely independent of whether or not the agent is reinforced during its exploration. In fact, the resulting 'map' will be more accurate if it is not, as its exploration will be more random. The output of the prediction units is taken as an additional source of information for the value and action functions. Since their main aim is to create intelligently distributed representations from punctate ones, it is only appropriate to use these prediction units for RBSW and R4X. Figure 4 compares average learning curves for RBSW with and without these extra mapping units, and with and without 6000 steps of latent learning (LL) in the absence of any reinforcement. A significant improvement is apparent. Figure 5 shows one set of predictions based on the RBSW representation¹ after a few un-reinforced iterations. The predictions are clearly fairly well developed and smooth - a predictable exponentially decaying hump. The only deviations from this are at the barrier and along the edges, where the effects of impermeability and immobility are apparent. Figure 6 shows the same set of predictions but after 2000 reinforced iterations, by which time the agent reaches the goal almost optimally. The predictions degenerate from being roughly radially symmetric (bar the barrier) to being highly asymmetric. Once the agent has learnt how to get to the goal from some location, the path it will follow, and so the locations it will visit from there, is largely fixed. The asymptotic values of the predictions will therefore be 0 for units not on the path, and γ^r for those on the path, where r is the number of steps since the agent's start point and γ is the discounting factor weighting immediate versus distant reinforcement. This is a severe limitation, since it implies that the topological information present in the early stages of learning evaporates, and with it almost all the benefits of the prediction units. 4 DISCUSSION Navigation comprises two problems: where the agent and the goals in its environment are, and how it can get to them. Having some form of cognitive map, as is suggested by the existence of place cells, addresses the first, but leaves open the second. For the case of one goal, the simple TD method described here is one solution. TD planning methods are clearly robust to changes in the way the input stimulus is represented. Distributed codes, particularly ones that allow for the barrier, make learning faster. This is even true for RA, which is sensitive to the orientation of the agent. All these results require each location to have a unique representation - Mozer and Bachrach [4] and Chrisley [3] and references therein look at how ambiguities can be resolved using information on the sequence of states the agent traverses. Since these TD planning methods are totally general, just like dynamical programming, they are unlikely to scale well. Some evidence for this comes from the relatively poor performance of R4X, with its quadrupled input dimension.
This puts the onus back either onto dividing the task into manageable chunks, or onto more sophisticated representation. Acknowledgements I am very grateful to Jay Buckingham, Kate Jeffrey, Richard Morris, Toby Tyrell, David Willshaw, and the attendees of the PDP Workshop at Edinburgh, the Connectionist Group at Amherst, and a spatial learning workshop at King's College Cambridge for their helpful comments. This work was funded by SERC. ¹ Note that these are normalised to a maximum value of 10, for graphical convenience. References [1] Albus, JS (1975). A new approach to manipulator control: the Cerebellar Model Articulation Controller (CMAC). Transactions of the ASME: Journal of Dynamical Systems, Measurement and Control, 97, pp 220-227. [2] Barto, AG, Sutton, RS & Watkins, CJCH (1989). Learning and Sequential Decision Making. Technical Report 89-95, Computer and Information Science, University of Massachusetts, Amherst, MA. [3] Chrisley, RL (1990). Cognitive map construction and use: A parallel distributed approach. In DS Touretzky, J Elman, TJ Sejnowski & GE Hinton, editors, Proceedings of the 1990 Connectionist Models Summer School. San Mateo, CA: Morgan Kaufmann. [4] Mozer, MC & Bachrach, J (1990). Discovering the structure of a reactive environment by exploration. In D Touretzky, editor, Advances in Neural Information Processing Systems 2, pp 439-446. San Mateo, CA: Morgan Kaufmann. [5] O'Keefe, J & Nadel, L (1978). The Hippocampus as a Cognitive Map. Oxford, England: Oxford University Press. [6] Sutton, RS (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3, pp 9-44. [7] Sutton, RS (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning. San Mateo, CA: Morgan Kaufmann. [8] Sutton, RS & Barto, AG. To appear. Time-derivative models of Pavlovian conditioning. In M Gabriel & JW Moore, editors, Learning and Computational Neuroscience. Cambridge, MA: MIT Press. [9] Watkins, CJCH (1989). Learning from Delayed Rewards. PhD Thesis. University of Cambridge, England. [Figure captions: Fig 2: The 'retina' for RA, showing the agent, barrier, orientation, retina, and angular buckets. Fig 3: Different representations (average extra steps to goal vs learning iterations). Fig 4: Mapping with RBSW (average extra steps to goal vs learning iterations). Fig 5: Initial predictions from (5,6). Fig 6: Predictions after 2000 iterations.]",
"Introduction Hand-written digit recognition has become one of the touchstone problems in neural networks recently. Large databases of training examples such as the NIST (National Institute of Standards and Technology) Special Database 3 have become available, and real-world applications with clear practical value, such as recognizing zip codes in letters, have emerged. Diverse architectures with varying learning rules have been proposed, including feed-forward networks (Denker et al. 1989; le Cun et al. 1990; Martin and Pittman 1990), self-organizing maps (Allinson et al. 1994), and dedicated approaches such as the neocognitron (Fukushima and Wake 1990). The problem is difficult because handwriting varies a lot, some digits are easily confusable, and recognition must be based on small but crucial differences. For example, the digits 3 and 8, 4 and 9, and 1 and 7 have several overlapping segments, and the differences are often lost in the noise. Thus, hand-written digit recognition can be seen as a process of identifying the distinct features and producing an internal representation where the significant differences are magnified, making the recognition easier. In this paper, the Laterally Interconnected Synergetically Self-Organizing Map architecture (LISSOM; Sirosh and Miikkulainen 1994, 1995, 1996) was employed to form such a separable representation. The lateral inhibitory connections of the LISSOM map decorrelate features in the input, retaining only those differences that are the most significant. Using LISSOM as a front end, the actual recognition can be performed by any standard neural network architecture, such as the perceptron. The experiments showed that while direct recognition of the digit bitmaps with a simple perceptron network is successful 72.3% of the time, and recognizing them using a standard self-organizing map (SOM) as the front end 84.1% of the time, the recognition rate is 88.1% based on the LISSOM network. These results suggest that LISSOM can serve as an effective front end for real-world handwritten character recognition systems. 2 The Recognition System 2.1 Overall architecture The system consists of two networks: a 20 x 20 LISSOM map performs the feature analysis and decorrelation of the input, and a single layer of 10 perceptrons the final recognition (Figure 1 (a)). The input digit is represented as a bitmap on the 32 x 32 input layer. Each LISSOM unit is fully connected to the input layer through the afferent connections, and to the other units in the map through lateral excitatory and inhibitory connections (Figure 1 (b)). The excitatory connections are short range, connecting only to the closest neighbors of the unit, but the inhibitory connections cover the whole map. The perceptron layer consists of 10 units, corresponding to digits 0 to 9. The perceptrons are fully connected to the LISSOM map, receiving the full activation pattern on the map as their input. The perceptron weights are learned through the delta rule, and the LISSOM afferent and lateral weights through Hebbian learning. 2.2 LISSOM Activity Generation and Weight Adaptation The afferent and lateral weights in LISSOM are learned through Hebbian adaptation. A bitmap image is presented to the input layer, and the initial activity of the map is calculated as the weighted sum of the input.
For unit (i, j), the initial response η_ij is η_ij = σ(Σ_ab μ_ij,ab ξ_ab), where ξ_ab is the activation of input unit (a, b), μ_ij,ab is the afferent weight connecting input unit (a, b) to map unit (i, j), and σ is a piecewise linear approximation of the sigmoid activation function. The activity is then settled through the lateral connections. Each new activity η_ij(t) at step t depends on the afferent activation and the lateral excitation and inhibition: η_ij(t) = σ(Σ_ab μ_ij,ab ξ_ab + γ_e Σ_kl E_ij,kl η_kl(t-1) - γ_i Σ_kl I_ij,kl η_kl(t-1)), where E_ij,kl and I_ij,kl are the excitatory and inhibitory connection weights from map unit (k, l) to (i, j) and η_kl(t-1) is the activation of unit (k, l) during the previous time step. The constants γ_e and γ_i control the relative strength of the lateral excitation and inhibition. After the activity has settled, the afferent and lateral weights are modified according to the Hebb rule. [Figure 1: The system architecture. (a) The input layer is activated according to the bitmap image of digit 6. The activation propagates through the afferent connections to the LISSOM map, and settles through its lateral connections into a stable pattern. This pattern is the internal representation of the input that is then recognized by the perceptron layer. Through the connections from LISSOM to the perceptrons, the unit representing 6 is strongly activated, with weak activations on other units such as 3 and 8. (b) The lateral connections to unit (i, j), indicated by the dark square, are shown. The neighborhood of excitatory connections (lightly shaded) is elevated from the map for a clearer view. The units in the excitatory region also have inhibitory lateral connections (indicated by medium shading) to the center unit. The excitatory radius is 1 and the inhibitory connections cover a much wider area.] Afferent weights are normalized so that the length of the weight vector remains the same; lateral weights are normalized to keep the sum of weights constant (Sirosh and Miikkulainen 1994): μ_ij,mn(t+1) = [μ_ij,mn(t) + α_inp η_ij ξ_mn] / sqrt(Σ_mn [μ_ij,mn(t) + α_inp η_ij ξ_mn]^2) (3), where μ_ij,mn is the afferent weight from input unit (m, n) to map unit (i, j), and α_inp is the input learning rate; w_ij,kl is the lateral weight (either excitatory E_ij,kl or inhibitory I_ij,kl) from map unit (k, l) to (i, j), and α is the lateral learning rate (either α_exc or α_inh). 2.3 Perceptron Output Generation and Weight Adaptation The perceptrons at the output of the system receive the activation pattern on the LISSOM map as their input. The perceptrons are trained after the LISSOM map has been organized. The activation for the perceptron unit O_m is O_m = C Σ_ij η_ij v_ij,m, where C is a scaling constant, η_ij is the activity of LISSOM map unit (i, j), and v_ij,m is the connection weight between LISSOM map unit (i, j) and output layer unit m. The delta rule is used to train the perceptrons: the weight adaptation is proportional to the map activity and the difference between the output and the target: Δv_ij,m = α_out η_ij (ζ_m - O_m), where α_out is the learning rate of the perceptron weights, η_ij is the LISSOM map unit activity, and ζ_m is the target activation for unit m (ζ_m = 1 if m is the correct digit, 0 otherwise). [Table 1: Final Recognition Results. The average recognition percentage and its variance over the 10 different splits are shown for the training and test sets. The differences in each set are statistically significant with p > .9999.]
3 Experiments A subset of 2992 patterns from the NIST Database 3 was used as training and testing data.¹ The patterns were normalized to make sure that each example had an equal effect on the LISSOM map (Sirosh and Miikkulainen 1994). LISSOM was trained with 2000 patterns. Of these, 1700 were used to train the perceptron layer, and the remaining 300 were used as the validation set to determine when to stop training the perceptrons. The final recognition performance of the whole system was measured on the remaining 992 patterns, which neither LISSOM nor the perceptrons had seen during training. The experiment was repeated 10 times with different random splits of the 2992 input patterns into training, validation, and testing sets. The LISSOM map can be organized starting from initially random weights. However, if the input dimensionality is large, as it is in case of the 32 x 32 bitmaps, each unit on the map is activated roughly to the same degree, and it is difficult to bootstrap the self-organizing process (Sirosh and Miikkulainen 1994, 1996). The standard Self-Organizing Map algorithm can be used to preorganize the map in this case. The SOM performs preliminary feature analysis of the input, and forms a coarse topological map of the input space. This map can then be used as the starting point for the LISSOM algorithm, which modifies the topological organization and learns lateral connections that decorrelate and represent a more clear categorization of the input patterns. The initial self-organizing map was formed in 8 epochs over the training set, gradually reducing the neighborhood radius from 20 to 8. The lateral connections were then added to the system, and over another 30 epochs, the afferent and lateral weights of the map were adapted according to equations 3 and 4. In the beginning, the excitation radius was set to 8 and the inhibition radius to 20. The excitation radius was gradually decreased to 1, making the activity patterns more concentrated and causing the units to become more selective to particular types of input patterns. For comparison, the initial self-organized map was also trained for another 30 epochs, gradually decreasing the neighborhood size to 1 as well. The final afferent weights for the SOM and LISSOM maps are shown in figures 2 and 3. After the SOM and LISSOM maps were organized, a complete set of activation patterns on the two maps were collected. These patterns then formed the training input for the perceptron layer. Two separate versions were each trained for 500 epochs, one with SOM and the other with LISSOM patterns. A third perceptron layer was trained directly with the input bitmaps as well. Recognition performance was measured by counting how often the most highly active perceptron unit was the correct one. The results were averaged over the 10 different splits. On average, the final LISSOM+perceptron system correctly recognized 88.1% of the 992-pattern test sets. This is significantly better than the 84.1% of the SOM+perceptron system, and the 72.3% achieved by the perceptron layer alone (Table 1). ¹ Downloadable at ftp://sequoyah.ncsl.nist.gov/pub/databases/. [Figure 2: Final Afferent Weights of the SOM map. The digit-like patterns represent the afferent weights of each map unit projected on the input layer. For example, the lower left corner represents the afferent weights of unit (0,0). High weight values are shown in black and low in white. The pattern of weights shows the input pattern to which this unit is most sensitive (6 in this case). There are local clusters sensitive to each digit category.]
These results suggest that the internal representations generated by the LISSOM map are more distinct and easier to recognize than the raw input patterns and the representations generated by the SOM map. 4 Discussion The architecture was motivated by the hypothesis that the lateral inhibitory connections of the LISSOM map would decorrelate and force the map activity patterns to become more distinct. The recognition could then be performed by even the simplest classification architectures, such as the perceptron. Indeed, the LISSOM representations were easier to recognize than the SOM patterns, which lends evidential support to the hypothesis. In additional experiments, the perceptron output layer was replaced by a two-weight-layer backpropagation network and a Hebbian associator net, and trained with the same patterns as the perceptrons. The recognition results were practically the same for the perceptron, backpropagation, and Hebbian output networks, indicating that the internal representations formed by the LISSOM map are the crucially important part of the recognition system. A comparison of the learning curves reveals two interesting effects (figure 4). First, even though the perceptron net trained with the raw input patterns initially performs well on the test set, its generalization decreases dramatically during training. This is because the net only learns to memorize the training examples, which does not help much with new noisy patterns. Good internal representations are therefore crucial for generalization. Second, even though initially the settling process of the LISSOM map forms patterns that are significantly easier to recognize than the initial, unsettled patterns (formed through the afferent connections only), this difference becomes insignificant later during training. The afferent connections are modified according to the final, settled patterns, and gradually learn to anticipate the decorrelated internal representations that the lateral connections form. [Figure 3: Final Afferent Weights of the LISSOM map. The squares identify the above-average inhibitory lateral connections to unit (10,4) (indicated by the thick square). Note that inhibition comes mostly from areas of similar functionality (i.e. areas sensitive to similar input), thereby decorrelating the map activity and forming a sparser representation of the input.] 5 Conclusion The experiments reported in this paper show that LISSOM forms internal representations of the input patterns that are easier to categorize than the raw inputs and the patterns on the SOM map, and suggest that LISSOM can form a useful front end for character recognition systems, and perhaps for other pattern recognition systems as well (such as speech). The main direction of future work is to apply the approach to larger data sets, including the full NIST 3 database, to use a more powerful recognition network instead of the perceptron, and to increase the map size to obtain a richer representation of the input space. Acknowledgements This research was supported in part by the National Science Foundation under grant IRI-9309273.
Computer time for the simulations was provided by the Pittsburgh Supercomputing Center under grants IRI930005P and IRI940004P, and by a High Performance Computer Time Grant from the University of Texas at Austin. [Figure 4: Comparison of the learning curves. A perceptron network was trained to recognize four different kinds of internal representations: the settled LISSOM patterns, the LISSOM patterns before settling, the patterns on the final SOM network, and raw input bitmaps. The recognition accuracy on the test set was then measured over the training epochs and averaged over 10 simulations. The generalization of the raw-input perceptron system decreases rapidly as the net learns to memorize the training patterns. The difference of using settled and unsettled LISSOM patterns diminishes as the afferent weights of LISSOM learn to take into account the decorrelation performed by the lateral weights.] References Allinson, N. M., Johnson, M. J., and Moon, K. J. (1994). Digital realisation of self-organising maps. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 6. San Mateo, CA: Morgan Kaufmann. Denker, J. S., Gardner, W. R., Graf, H. P., Henderson, D., Howard, R. E., Hubbard, W., Jackel, L. D., Baird, H. S., and Guyon, I. (1989). Neural network recognizer for hand-written zip code digits. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 1. San Mateo, CA: Morgan Kaufmann. Fukushima, K., and Wake, N. (1990). Alphanumeric character recognition by neocognitron. In Advanced Neural Computers, 263-270. Elsevier Science Publishers B.V. (North-Holland). le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. (1990). Handwritten digit recognition with a back-propagation network. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 2. San Mateo, CA: Morgan Kaufmann. Martin, G. L., and Pittman, J. A. (1990). Recognizing hand-printed letters and digits. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 2. San Mateo, CA: Morgan Kaufmann. Sirosh, J., and Miikkulainen, R. (1994). Cooperative self-organization of afferent and lateral connections in cortical maps. Biological Cybernetics, 71:66-78. Sirosh, J., and Miikkulainen, R. (1995). Ocular dominance and patterned lateral connections in a self-organizing model of the primary visual cortex. In Tesauro, G., Touretzky, D. S., and Leen, T. K., editors, Advances in Neural Information Processing Systems 7. Cambridge, MA: MIT Press. Sirosh, J., and Miikkulainen, R. (1996). Topographic receptive fields and patterned lateral interaction in a self-organizing model of the primary visual cortex. Neural Computation (in press).",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@10 | 0.9466 |
| cosine_precision@10 | 0.0947 |
| cosine_recall@10 | 0.9466 |
| cosine_ndcg@5 | 0.8507 |
| **cosine_ndcg@10** | **0.8603** |
| cosine_mrr@10 | 0.8323 |
| cosine_map@10 | 0.8323 |
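A minimal sketch of reproducing this kind of evaluation with the evaluator named above (the queries, corpus, and relevance judgements below are hypothetical placeholders, not the actual evaluation data):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical toy data; the real evaluation split is not included in this card.
queries = {"q1": "Proposed architecture for time-based pattern recognition"}
corpus = {
    "d1": "INTRODUCTION Recent interest in connectionist networks ...",
    "d2": "An unrelated document ...",
}
relevant_docs = {"q1": {"d1"}}  # corpus ids that are relevant to each query

model = SentenceTransformer("path/to/this-model")  # placeholder model id
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="ir-eval")
print(evaluator(model))  # reports cosine_ndcg@10 among the other metrics above
```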
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 14,255 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 13.4 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 508.46 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Proposed architecture for time-based pattern recognition in speech, motion, and signatures</code> | <code>INTRODUCTION Recent interest in connectionist, or "neural" networks has emphasized their ability to store, retrieve and process patterns1,2. For most applications, the patterns to be processed are static in the sense that they lack temporal context. Another important class consists of those problems that require the processing of temporal patterns. In these the information to be learned or processed is not a particular pattern but a sequence of patterns. Such problems include speech processing, signature verification, motion detection, and predictive signal processin,r-8. More precisely, temporal pattern processing means that the desired output depends not only on the current input but also on those preceding or following it as well. This implies that two identical inputs at different time steps might yield different desired outputs depending on what patterns precede or follow them . There is another feature characteristic of much temporal pattern processing. Here an entire sequence of...</code> |
| <code>Design approach for stabilizing analog VLSI neural systems</code> | <code>INTRODUCTION The term "lateral inhibition" first arose in neurophysiology to describe a common form of neural circuitry in which the output of each neuron in some population is used to inhibit the response of each of its neighbors. Perhaps the best understood example is the horizontal cell layer in the vertebrate retina, in which lateral inhibition simultaneously enhances intensity edges and acts as an automatic lain control to extend the dynamic range of the retina as a whole. The principle has been used in the design of artificial neural system algorithms by Kohonen 2 and others and in the electronic design of neural chips by Carver Mead et. al.3 ,4. In the VLSI implementation of neural systems, it is convenient to build lateral inhibition networks by using a locally connected on-chip resistive grid. Linear resistors fabricated in, e.g., polysilicon, yield a very compact realization, and nonlinear resistive grids, made from MOS transistors, have been found useful for image segmentati...</code> |
| <code>Neural network classifier using coding theory for improved classification capacity</code> | <code>INTRODUCTION Associative recall using neural networks has recently received a great deal of attention. Hopfield in his papers [1,2) deSCribes a mechanism which iterates through a feedback loop and stabilizes at the memory element that is nearest the input, provided that not many memory vectors are stored in the machine. He has also shown that the number of memories that can be stored in an N-neuron system is about O.15N for N between 30 and 100. McEliece et al. in their work (3) showed that for synchronous operation of the Hopfield memory about N (2IogN) data vectors can be stored reliably when N is large. Abu-Mostafa (4) has predicted that the upper bound for the number of data vectors in an N-neuron Hopfield machine is N. We believe that one should be able to devise a machine with M, the number of data vectors, linear in N and larger than the O.15N achieved by the Hopfield method. Figure 1 (a) Classification problems versus (b) Error control decoding problems In this paper we are spe...</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
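A hedged sketch of how training with this loss is typically wired up in Sentence Transformers v3 (the base model id and the toy dataset are assumptions, not taken from this card):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("BAAI/bge-large-en-v1.5")  # placeholder base model
train_dataset = Dataset.from_dict({
    "anchor": ["Proposed architecture for time-based pattern recognition"],
    "positive": ["INTRODUCTION Recent interest in connectionist networks ..."],
})

# scale=20.0 with the default cosine similarity, matching the parameters listed above
loss = losses.CachedMultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```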
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 500
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.01
- `bf16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 500
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.01
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | cosine_ndcg@10 |
|:------:|:----:|:-------------:|:--------------:|
| 0.0893 | 10 | 0.5247 | 0.8247 |
| 0.1786 | 20 | 0.2625 | 0.8446 |
| 0.2679 | 30 | 0.2159 | 0.8485 |
| 0.3571 | 40 | 0.1849 | 0.8487 |
| 0.4464 | 50 | 0.2149 | 0.8506 |
| 0.5357 | 60 | 0.1538 | 0.8534 |
| 0.625 | 70 | 0.1617 | 0.8547 |
| 0.7143 | 80 | 0.1463 | 0.8575 |
| 0.8036 | 90 | 0.1626 | 0.8592 |
| 0.8929 | 100 | 0.1334 | 0.8598 |
| 0.9821 | 110 | 0.168 | 0.8603 |
### Framework Versions
- Python: 3.12.9
- Sentence Transformers: 3.4.1
- Transformers: 4.50.0
- PyTorch: 2.5.1
- Accelerate: 1.5.2
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
John6666/ri-mix-pony-illustrious-ri-mix-omega-sdxl
|
John6666
| 2025-06-19T09:51:28Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"realistic",
"illustration",
"2.5D",
"digital illustration",
"pony",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:45:41Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- realistic
- illustration
- 2.5D
- digital illustration
- pony
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/996495/ri-mix-pony-illustrious?modelVersionId=1916444).
This model created by [phinjo](https://civitai.com/user/phinjo).
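A minimal, untested sketch for loading this checkpoint with diffusers (the prompt and sampling settings are illustrative only):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/ri-mix-pony-illustrious-ri-mix-omega-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("1girl, digital illustration, 2.5D", num_inference_steps=28).images[0]
image.save("sample.png")
```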
|
KirubaLS/Outreach_model_6
|
KirubaLS
| 2025-06-19T09:51:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2025-06-19T09:49:58Z |
---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
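Until the author fills this in, a hedged sketch assuming the adapter applies on top of the listed base model `google/gemma-2b` (prompt and generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "KirubaLS/Outreach_model_6")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("Write a short outreach email.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```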
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
NLPGenius/LlamaDastak
|
NLPGenius
| 2025-06-19T09:49:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2024-11-27T09:40:10Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "What is the step-by-step procedure for the drinking water service in KP?"
generator = pipeline("text-generation", model="NLPGenius/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu121
- Datasets: 2.21.0
- Tokenizers: 0.20.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sitammeur/medgemma-4b-it-sft-lora-crc100k
|
sitammeur
| 2025-06-19T09:47:58Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"image-text-to-text",
"conversational",
"en",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-17T17:03:17Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-crc100k
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: apache-2.0
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
- confusion_matrix
pipeline_tag: image-text-to-text
---
# Model Card for medgemma-4b-it-sft-lora-crc100k
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sitammeur/medgemma-4b-it-sft-lora-crc100k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Alvin0619-Merge3
|
Alvin-LiuJia
| 2025-06-19T09:33:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge",
"base_model:finetune:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:38:44Z |
---
base_model: Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Alvin-LiuJia
- **License:** apache-2.0
- **Finetuned from model :** Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
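A minimal inference sketch with the standard transformers pipeline (the question and generation settings are illustrative, not from the author):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Alvin0619-Merge3",
)
messages = [{"role": "user", "content": "Summarize common causes of anemia."}]
print(generator(messages, max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```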
|
John6666/natural-noob-xl-v-pred-anime-furry-experiment-v30-sdxl
|
John6666
| 2025-06-19T09:33:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"furry",
"illustration",
"vivid colors",
"accuracy",
"detail",
"creativity",
"v-pred",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:finetune:Laxhar/noobai-XL-Vpred-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:27:07Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- furry
- illustration
- vivid colors
- accuracy
- detail
- creativity
- v-pred
- noobai
- illustrious
base_model: Laxhar/noobai-XL-Vpred-1.0
---
Original model is [here](https://civitai.com/models/1641988/natural-noob-xl-v-pred-anime-and-furry-experiment?modelVersionId=1917776).
This model created by [DarkFawkes](https://civitai.com/user/DarkFawkes).
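Since this is a v-prediction checkpoint, the scheduler generally needs to be configured for `v_prediction`; a hedged sketch (prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/natural-noob-xl-v-pred-anime-furry-experiment-v30-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# v-pred models are usually sampled with a scheduler set to v_prediction
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, prediction_type="v_prediction"
)

image = pipe("1girl, vivid colors, detailed illustration").images[0]
image.save("sample.png")
```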
|
senga-ml/dnote-body-auto-lr
|
senga-ml
| 2025-06-19T09:30:57Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-17T10:44:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
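Until the card is filled in, a hedged sketch for a vision-encoder-decoder checkpoint like this one (the processor choice and decoding settings are assumptions, since the card does not document them):

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained("senga-ml/dnote-body-auto-lr")
image_processor = AutoImageProcessor.from_pretrained("senga-ml/dnote-body-auto-lr")
tokenizer = AutoTokenizer.from_pretrained("senga-ml/dnote-body-auto-lr")

image = Image.open("page.png").convert("RGB")  # hypothetical input image
pixel_values = image_processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_new_tokens=128)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```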
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/moonmilk-contrast-v10-sdxl
|
John6666
| 2025-06-19T09:27:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"anime style",
"high contrast",
"pastel colors",
"portrait focus",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:21:03Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- anime style
- high contrast
- pastel colors
- portrait focus
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1693666/moonmilk-contrast?modelVersionId=1916780).
This model created by [Neural_Lens](https://civitai.com/user/Neural_Lens).
|
musab-mk/functionary_lora_test_model
|
musab-mk
| 2025-06-19T09:26:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T09:25:53Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** musab-mk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
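A hedged loading sketch with Unsloth, mirroring how the model was trained (assumes the repo contains weights loadable this way; prompt is illustrative):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="musab-mk/functionary_lora_test_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode

inputs = tokenizer("List the tools you can call.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```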
|
veddhanth/lora-trained-xl-stage-1-597
|
veddhanth
| 2025-06-19T09:24:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-19T08:57:34Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a realistic portrait of sks face
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-1-597
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-1-597 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a realistic portrait of sks face to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/veddhanth/lora-trained-xl-stage-1-597/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
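Until the author adds an official snippet, a hedged sketch: load the SDXL base model and apply these LoRA weights with diffusers (sampling settings are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-1-597")

image = pipe("a realistic portrait of sks face").images[0]
image.save("portrait.png")
```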
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Adarsh203/Llama-3.2-3B-Instruct_cot_lora_model_
|
Adarsh203
| 2025-06-19T09:16:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T09:15:38Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Adarsh203
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sungkwan2/my_awesome_opus_books_model
|
sungkwan2
| 2025-06-19T09:14:15Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-tc-big-en-ko",
"base_model:finetune:Helsinki-NLP/opus-mt-tc-big-en-ko",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-19T08:59:37Z |
---
library_name: transformers
license: cc-by-4.0
base_model: Helsinki-NLP/opus-mt-tc-big-en-ko
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-en-ko](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-ko) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4960
- Bleu: 0.007
- Gen Len: 213.19
## Model description
More information needed
## Intended uses & limitations
More information needed
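A minimal inference sketch, assuming the fine-tuned checkpoint in this repo loads through the standard 🤗 translation pipeline (the example sentence is illustrative; given the low BLEU reported below, expect limited output quality):

```python
from transformers import pipeline

# English -> Korean translator fine-tuned from Helsinki-NLP/opus-mt-tc-big-en-ko
translator = pipeline("translation", model="sungkwan2/my_awesome_opus_books_model")
print(translator("The old man looked at the sea.")[0]["translation_text"])
```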
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 4.7441 | 1.0 | 50 | 4.4915 | 0.0069 | 212.985 |
| 4.2174 | 2.0 | 100 | 4.4960 | 0.007 | 213.19 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
humendra/chronos-t5-large-fine-tuned-run-33
|
humendra
| 2025-06-19T09:13:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-19T09:12:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
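Given the repo name, this appears to be a fine-tune of Amazon's chronos-t5-large time-series model; below is a minimal forecasting sketch under that assumption (the `chronos-forecasting` package provides `ChronosPipeline`; Chronos compatibility of this checkpoint is not confirmed by the card):

```python
import torch
from chronos import ChronosPipeline  # pip install chronos-forecasting

# Assumption: this checkpoint is Chronos-compatible, as the repo name suggests
pipeline = ChronosPipeline.from_pretrained(
    "humendra/chronos-t5-large-fine-tuned-run-33",
    device_map="cpu",
    torch_dtype=torch.float32,
)

# Toy context series; forecast shape is (num_series, num_samples, prediction_length)
context = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0])
forecast = pipeline.predict(context, prediction_length=4)
print(forecast.mean(dim=1))  # sample-mean forecast per horizon step
```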
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rekhtalabs/ur-2-hi-translit
|
rekhtalabs
| 2025-06-19T09:10:39Z | 0 | 2 | null |
[
"custom-transliterator",
"pytorch",
"transliterations",
"urdu",
"hindi",
"RekhtaLabs",
"Sequence2Sequence",
"Transformers",
"ur",
"hi",
"license:other",
"region:us"
] | null | 2025-06-18T12:06:45Z |
---
license: other
language:
- ur
- hi
tags:
- pytorch
- transliterations
- urdu
- hindi
- RekhtaLabs
- Sequence2Sequence
- Transformers
---

# Urdu to Hindi Transliteration Model (Character-Level)
This is a lightweight Transformer-based model trained for **character-level transliteration** of **Urdu poetry into Hindi script**. The model is specially tuned for literary and poetic text, making it ideal for applications involving shayari, nazm, or ghazals.
# Live Inference
https://rekhtalabs.org/demo/transliterate
## Model Overview
| Feature | Value |
|-------------------------|----------------------------|
| **Architecture** | Transformer (BART-style) |
| **Tokenizer** | Character-level |
| **Total Parameters** | 4M |
| **Source Vocab Size** | 87 (Urdu characters) |
| **Target Vocab Size** | 109 (Hindi characters) |
| **Embedding Size** | 256 |
| **Hidden Size** | 256 (`d_model`) |
| **Feedforward Size** | 512 |
| **Encoder Layers** | 3 |
| **Decoder Layers** | 3 |
| **Attention Heads** | 4 |
| **Max Sequence Length** | 128 characters |
---
## Usage
```python
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="rekhtalabs/ur-2-hi-translit",
    local_dir="./ur-2-hi-translit",
    local_dir_use_symlinks=False
)
```
```bash
cd ur-2-hi-translit
pip install -r requirements.txt
```
```python
import torch
import sentencepiece as spm
from torch import nn


class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_len=5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-torch.log(torch.tensor(10000.0)) / d_model))
        pe[:, 0::2] = torch.sin(position.float() * div_term)
        pe[:, 1::2] = torch.cos(position.float() * div_term)
        self.pe = pe.unsqueeze(0)

    def forward(self, x):
        return x + self.pe[:, :x.size(1)].to(x.device)


class Transformer(nn.Module):
    def __init__(self, src_vocab_size, tgt_vocab_size, d_model=256, nhead=4,
                 num_layers=3, dim_feedforward=512, max_len=128):
        super().__init__()
        self.src_tok_emb = nn.Embedding(src_vocab_size, d_model)
        self.tgt_tok_emb = nn.Embedding(tgt_vocab_size, d_model)
        self.pos_encoder = PositionalEncoding(d_model, max_len)
        self.transformer = nn.Transformer(
            d_model=d_model,
            nhead=nhead,
            num_encoder_layers=num_layers,
            num_decoder_layers=num_layers,
            dim_feedforward=dim_feedforward,
            batch_first=True
        )
        self.out = nn.Linear(d_model, tgt_vocab_size)

    def forward(self, src, tgt):
        src = self.pos_encoder(self.src_tok_emb(src))
        tgt = self.pos_encoder(self.tgt_tok_emb(tgt))
        tgt_input = tgt
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt_input.size(1)).to(src.device)
        out = self.transformer(src, tgt_input, tgt_mask=tgt_mask)
        return self.out(out)


device = torch.device("cpu")

sp_nastaaliq = spm.SentencePieceProcessor(model_file='nastaaliq_char.model')
sp_devanagari = spm.SentencePieceProcessor(model_file='devanagari_char.model')

model = Transformer(
    src_vocab_size=sp_nastaaliq.get_piece_size(),
    tgt_vocab_size=sp_devanagari.get_piece_size()
)
checkpoint = torch.load("transformer_transliteration_final.pt", map_location=device)
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
model.to(device)


def transliterate_urdu_to_hindi(text_urdu, max_len=128):
    # Prepend the start token (2) and append the end token (3)
    src_ids = [2] + sp_nastaaliq.encode(text_urdu)[:max_len - 2] + [3]
    src_tensor = torch.tensor(src_ids).unsqueeze(0).to(device)  # shape: (1, seq_len)
    tgt_ids = [2]
    tgt_tensor = torch.tensor(tgt_ids).unsqueeze(0).to(device)
    # Greedy character-by-character decoding
    for _ in range(max_len):
        output = model(src_tensor, tgt_tensor)
        next_token_logits = output[0, -1, :]
        next_token_id = torch.argmax(next_token_logits).item()
        if next_token_id == 3:
            break
        tgt_ids.append(next_token_id)
        tgt_tensor = torch.tensor(tgt_ids).unsqueeze(0).to(device)
    return sp_devanagari.decode(tgt_ids[1:])


res = transliterate_urdu_to_hindi("وسوسے دل میں نہ رکھ خوف رسن لے کے نہ چل")
print(res)
```
## Output
```python
वसवसे दिल में न रख ख़ौफ़-ए-रसन ले के न चल
```
---
## Dataset
- Trained on approximately **800,000 Urdu-Hindi sentence pairs**
- Sourced and curated for transliteration.
- Character-level alignment ensured for quality
---
|
JKL0909/AOI_Inspection_model
|
JKL0909
| 2025-06-19T09:10:14Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T09:10:07Z |
---
license: apache-2.0
---
|
Kortix/FastApply-7B-v1.0_GGUF
|
Kortix
| 2025-06-19T09:08:11Z | 104 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"fast-apply",
"instant-apply",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-24T09:27:04Z |
---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
- fast-apply
- instant-apply
---
> **Remember to use `temperature = 0` for optimal results during inference.**
*🚀 Update May 2025:* For production-grade throughput, we use *[Morph](https://morphllm.com)* (the hosted Fast Apply API powering [SoftGen AI](https://softgen.ai/)).
- Morph hits *~4,500 tok/s* even on huge token diffs
- A larger model trained on millions of examples and tuned for accuracy.
> Stable inference, large free tier, highly recommended if you need serious speed in prod.
# FastApply-7B-v1.0
[Github: kortix-ai/fast-apply](https://github.com/kortix-ai/fast-apply)
[Dataset: Kortix/FastApply-dataset-v1.0](https://huggingface.co/datasets/Kortix/FastApply-dataset-v1.0)
[Try it now on 👉 Google Colab](https://colab.research.google.com/drive/1aBqM8Lqso0Xfgtr75G4LFQivXcChU_36?usp=sharing)
## Model Details
### Basic Information
- **Developed by:** Kortix
- **License:** apache-2.0
- **Finetuned from model:** [unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit)
### Model Description
FastApply-7B-v1.0 is a 7B model designed for instant code application, producing full file edits to power [SoftGen AI](https://softgen.ai/).
It is part of the Fast Apply pipeline for data generation and fine-tuning Qwen2.5 Coder models.
The model achieves high throughput when deployed on fast providers like Fireworks while maintaining high edit accuracy, with a speed of approximately 150 tokens/second.
## Intended Use
FastApply-7B-v1.0 is intended for use in AI-powered code editors and tools that require fast, accurate code modifications. It is particularly well-suited for:
- Instant code application tasks
- Full file edits
- Integration with AI-powered code editors like Aider and PearAI
- Local tools to reduce the cost of frontier model output
## Inference template
FastApply-7B-v1.0 is based on the Qwen2.5 Coder architecture and is fine-tuned for code editing tasks. It uses a specific prompt structure for inference:
```
<|im_start|>system
You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated.<|im_end|>
<|im_start|>user
Merge all changes from the <update> snippet into the <code> below.
- Preserve the code's structure, order, comments, and indentation exactly.
- Output only the updated code, enclosed within <updated-code> and </updated-code> tags.
- Do not include any additional text, explanations, placeholders, ellipses, or code fences.
<code>{original_code}</code>
<update>{update_snippet}</update>
Provide the complete updated code.<|im_end|>
<|im_start|>assistant
```
The model's output is structured as:
```
<updated-code>[Full-complete updated file]</updated-code>
```
## Additional Information
For more details on the Fast Apply pipeline, data generation process, and deployment instructions, please refer to the [GitHub repository](https://github.com/kortix-ai/fast-apply).
## How to Use
To use the model, you can load it using the Hugging Face Transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Kortix/FastApply-7B-v1.0")
tokenizer = AutoTokenizer.from_pretrained("Kortix/FastApply-7B-v1.0")
# Prepare your input following the prompt structure mentioned above
input_text = """<|im_start|>system
You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated.<|im_end|>
<|im_start|>user
Merge all changes from the <update> snippet into the <code> below.
- Preserve the code's structure, order, comments, and indentation exactly.
- Output only the updated code, enclosed within <updated-code> and </updated-code> tags.
- Do not include any additional text, explanations, placeholders, ellipses, or code fences.
<code>{original_code}</code>
<update>{update_snippet}</update>
Provide the complete updated code.<|im_end|>
<|im_start|>assistant
"""
input_text = input_text.format(
    original_code=original_code,    # user-supplied: the current file contents
    update_snippet=update_snippet,  # user-supplied: the edit snippet to merge
).strip()
# Generate the response
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_length=8192,)
response = tokenizer.decode(output[0][len(input_ids[0]):])
print(response)
# Extract the updated code from the response
updated_code = response.split("<updated-code>")[1].split("</updated-code>")[0]
```
|
seroe/ms-marco-MiniLM-L12-v2-turkish-reranker-triplet
|
seroe
| 2025-06-19T09:03:00Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"cross-encoder",
"generated_from_trainer",
"dataset_size:89964",
"loss:CachedMultipleNegativesRankingLoss",
"text-ranking",
"tr",
"dataset:seroe/vodex-turkish-reranker-triplets",
"arxiv:1908.10084",
"base_model:cross-encoder/ms-marco-MiniLM-L12-v2",
"base_model:finetune:cross-encoder/ms-marco-MiniLM-L12-v2",
"license:apache-2.0",
"model-index",
"region:us"
] |
text-ranking
| 2025-05-13T19:33:21Z |
---
language:
- tr
license: apache-2.0
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:89964
- loss:CachedMultipleNegativesRankingLoss
base_model:
- cross-encoder/ms-marco-MiniLM-L12-v2
datasets:
- seroe/vodex-turkish-reranker-triplets
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: cross-encoder/ms-marco-MiniLM-L12-v2
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: val hard
type: val-hard
metrics:
- type: map
value: 0.6082
name: Map
- type: mrr@10
value: 0.6074
name: Mrr@10
- type: ndcg@10
value: 0.6986
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: test hard
type: test-hard
metrics:
- type: map
value: 0.6059
name: Map
- type: mrr@10
value: 0.6051
name: Mrr@10
- type: ndcg@10
value: 0.6967
name: Ndcg@10
---
# cross-encoder/ms-marco-MiniLM-L12-v2
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) on the [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
## ⚠️ Domain-Specific Warning
This model was fine-tuned on Turkish data specifically sourced from the **telecommunications domain**.
While it performs well on telecom-related tasks such as mobile services, billing, campaigns, and subscription details, it may not generalize well to other domains.
Please assess its performance carefully before applying it outside of telecommunications use cases.
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) <!-- at revision 1427fd652930e4ba29e8149678df786c240d8825 -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets)
- **Language:** tr
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("seroe/ms-marco-MiniLM-L12-v2-turkish-reranker-triplet")
# Get scores for pairs of texts
pairs = [
    ['Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?', 'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.'],
    ['Kampanya süresince internet hızı nasıl değişebilir?', 'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.'],
    ["Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?", "Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir."],
    ['Taahhüt süresi dolmadan internet hizmeti iptal edilirse ne olur?', 'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.'],
    ['Aylık 15 GB ek paketini nereden satın alabilirim?', 'Bu ek paketi almak için hangi kanalları kullanabilirim?'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?',
    [
        'Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.',
        'Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.',
        "Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.",
        'Eğer taahhüt süresi bitmeden internet hizmeti iptal edilirse, aboneye sunulan D-Smart hizmeti de iptal edilecektir.',
        'Bu ek paketi almak için hangi kanalları kullanabilirim?',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `val-hard` and `test-hard`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
    "at_k": 10,
    "always_rerank_positives": true
}
```
| Metric | val-hard | test-hard |
|:------------|:---------------------|:---------------------|
| map | 0.6082 (-0.0256) | 0.6059 (-0.0204) |
| mrr@10 | 0.6074 (-0.0264) | 0.6051 (-0.0212) |
| **ndcg@10** | **0.6986 (+0.0633)** | **0.6967 (+0.0686)** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### vodex-turkish-reranker-triplets
* Dataset: [vodex-turkish-reranker-triplets](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets) at [ca7d206](https://huggingface.co/datasets/seroe/vodex-turkish-reranker-triplets/tree/ca7d2063ad4fec15fbf739835ab6926e051950c0)
* Size: 89,964 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 characters</li><li>mean: 57.83 characters</li><li>max: 112 characters</li></ul> | <ul><li>min: 35 characters</li><li>mean: 92.19 characters</li><li>max: 221 characters</li></ul> | <ul><li>min: 31 characters</li><li>mean: 78.41 characters</li><li>max: 143 characters</li></ul> |
* Samples:
| query | positive | negative |
|:-------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|
| <code>Faturasız tarifelerde yurtdışı mesaj ücretleri ne kadardır?</code> | <code>Yurtdışına gönderilen mesajlar için ücret 75 kuruş olarak belirlenmiştir.</code> | <code>Faturasız tarifelerde yurtdışı mesaj ücretleri 10 kuruş olarak uygulanmaktadır.</code> |
| <code>Kampanya süresince internet hızı nasıl değişebilir?</code> | <code>Kampanya süresince, limit ve altyapının desteklediği azami internet hızına kadar internet hızı yükseltilebilir.</code> | <code>Kampanya süresince internet hızı sabit kalır ve değişiklik yapılamaz.</code> |
| <code>Vodafone'un tarifelerinde KDV ve ÖİV dahil midir?</code> | <code>Vodafone'un tarifelerinde belirtilen ücretlere KDV ve ÖİV dahildir.</code> | <code>Vodafone tarifelerinde KDV ve ÖİV, abonelerin talep etmesi durumunda eklenmektedir.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
    "scale": 10.0,
    "num_negatives": 4,
    "activation_fn": "torch.nn.modules.activation.Sigmoid",
    "mini_batch_size": 32
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 1024
- `per_device_eval_batch_size`: 1024
- `learning_rate`: 5e-07
- `weight_decay`: 0.1
- `max_grad_norm`: 0.8
- `warmup_ratio`: 0.25
- `bf16`: True
- `dataloader_num_workers`: 8
- `load_best_model_at_end`: True
- `group_by_length`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1024
- `per_device_eval_batch_size`: 1024
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-07
- `weight_decay`: 0.1
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 0.8
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.25
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 8
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: True
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | val-hard_ndcg@10 | test-hard_ndcg@10 |
|:------:|:----:|:-------------:|:----------------:|:-----------------:|
| 0.5682 | 50 | - | 0.7103 (+0.0750) | 0.7063 (+0.0782) |
| 1.125 | 100 | 1.3021 | 0.7094 (+0.0741) | 0.7065 (+0.0783) |
| 1.6932 | 150 | - | 0.7041 (+0.0688) | 0.7047 (+0.0765) |
| 2.25 | 200 | 0.9216 | 0.6997 (+0.0643) | 0.6996 (+0.0715) |
| 2.8182 | 250 | - | 0.6986 (+0.0633) | 0.6967 (+0.0686) |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.46.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.6.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_14_1_49
|
winnieyangwannan
| 2025-06-19T09:02:18Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:18:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed7-2025-06-19
|
morturr
| 2025-06-19T09:02:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T03:01:12Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed7-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed7-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
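A minimal sketch for loading this adapter with PEFT, assuming the repo contains standard adapter weights for the base model named above (gated access to Llama-2 applies; not confirmed by the card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach the PEFT adapter from this repo on top of the base model
model = PeftModel.from_pretrained(
    base, "morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed7-2025-06-19"
)
```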
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
ash001/ray-train-zero-3-bloom-1B-v2
|
ash001
| 2025-06-19T09:00:48Z | 0 | 0 | null |
[
"bloom",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T08:40:29Z |
---
license: apache-2.0
---
|
PatheticOTD/ppo-LunarLander-v3
|
PatheticOTD
| 2025-06-19T09:00:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T08:57:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: 259.60 +/- 11.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
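A minimal sketch using `huggingface_sb3`, assuming the checkpoint was pushed as a `.zip` (the filename below is an assumption; check the repo's Files tab):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption, not confirmed by the card
checkpoint = load_from_hub(
    repo_id="PatheticOTD/ppo-LunarLander-v3",
    filename="ppo-LunarLander-v3.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```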
|
ASIEK/dqn-SpaceInvadersNoFrameskip-v4
|
ASIEK
| 2025-06-19T08:58:36Z | 14 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T04:31:29Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 645.50 +/- 149.22
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ASIEK -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ASIEK -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ASIEK
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
yezg/qwen2.5-sqlbot-gguf
|
yezg
| 2025-06-19T08:57:35Z | 29 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T08:18:50Z |
---
base_model: unsloth/qwen2.5-coder-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yezg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-7b-bnb-4bit
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
morturr/Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed18-2025-06-19
|
morturr
| 2025-06-19T08:57:23Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T05:38:03Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed18-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed18-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
John6666/gray-color-25d-model-v10-testing-sdxl
|
John6666
| 2025-06-19T08:56:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"2.5D",
"girls",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T08:50:42Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- 2.5D
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1693405/graycolor-25d-model?modelVersionId=1916475).
This model was created by [GrayColor](https://civitai.com/user/GrayColor).
|
udayks/q-FrozenLake-v1-4x4-noSlippery
|
udayks
| 2025-06-19T08:55:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T08:51:18Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="udayks/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
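A fuller sketch of greedy evaluation with the downloaded Q-table, assuming the pickle follows the Hugging Face Deep RL course format with `qtable` and `env_id` keys (an assumption):

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Roughly what the course's load_from_hub helper does
path = hf_hub_download(
    repo_id="udayks/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl"
)
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```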
|
sanchit42/qwen3-0.6B-base-29reports-lora256-reason
|
sanchit42
| 2025-06-19T08:49:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:47:53Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ISAAC-XYN1-MATT-KERVI-JAVIER-ISAAC/ORIGINAL.VIDEO.18.ISAAC.XYN1.MATT.KERVI.JAVIER.ISAAC.X.VIRAL.ON.TWITTER
|
ISAAC-XYN1-MATT-KERVI-JAVIER-ISAAC
| 2025-06-19T08:49:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T08:48:55Z |
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-tvs-01.pdf)
https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-tvs-01.pdf
https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-cudis.pdf
|
phospho-app/Kai-13-gr00t-example_dataset_v2-se6pf
|
phospho-app
| 2025-06-19T08:48:05Z | 0 | 0 | null |
[
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-19T08:38:18Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [Kai-13/example_dataset_v2](https://huggingface.co/datasets/Kai-13/example_dataset_v2)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
New-tutorial-kamal-Kaur-19-videos/FULL.VIDEO.kamal.Kaur.viral.video.Link.viral.On.Social.Media.Official
|
New-tutorial-kamal-Kaur-19-videos
| 2025-06-19T08:44:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T08:44:50Z |
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-tvs-01.pdf)
https://caddo.gov/wp-content/uploads/ninja-forms/11/xxx-viral-new-video-media-streams-us-tvs-01.pdf
|
zeblok/zeblok
|
zeblok
| 2025-06-19T08:42:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T13:36:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
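Since this section is left empty, here is a minimal, hedged sketch of loading this repository with Transformers. The repo id comes from this entry; the presence of a standard config and tokenizer is an assumption.

```python
from transformers import AutoModel, AutoTokenizer

# Assumes the repo ships a standard Transformers config and tokenizer (not documented in the card).
tokenizer = AutoTokenizer.from_pretrained("zeblok/zeblok")
model = AutoModel.from_pretrained("zeblok/zeblok")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```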
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JohnDsue771/my-test-gov-mod
|
JohnDsue771
| 2025-06-19T08:37:46Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"region:us"
] | null | 2025-06-19T08:17:19Z |
# My Governance Fine-Tuned Mistral Model
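The card provides no usage instructions, so below is a minimal sketch assuming a standard Mistral causal-LM checkpoint; the repo id is taken from this entry, while prompt, dtype, and device placement are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "JohnDsue771/my-test-gov-mod"  # repo id from this entry
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Summarize the key principles of data governance."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```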
|
convsync/d435ba98-86bd-4206-b368-56cabad52870-my_trained_model
|
convsync
| 2025-06-19T08:36:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T08:36:22Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** convsync
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mzarev/Meta-Llama-3.1-8B-Instruct_finetuned_tulu-3-sft-personas-instruction-following_1750322003721
|
mzarev
| 2025-06-19T08:35:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:33:24Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mzarev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
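The card omits a usage snippet, so here is a hedged sketch using the `text-generation` pipeline tag from this entry; chat-style prompting is assumed from the instruct base model.

```python
from transformers import pipeline

# Repo id from this entry; instruction-following chat behavior is assumed from the base model.
generator = pipeline(
    "text-generation",
    model="mzarev/Meta-Llama-3.1-8B-Instruct_finetuned_tulu-3-sft-personas-instruction-following_1750322003721",
    device_map="auto",
)
messages = [{"role": "user", "content": "Give me three tips for writing clear commit messages."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```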
|
phospho-app/Selinaliu1030-gr00t-example_dataset_move_toast-7z8p6
|
phospho-app
| 2025-06-19T08:33:34Z | 0 | 0 | null |
[
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-19T08:31:21Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 165, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1146, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 996, in run_gr00t_training
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 790, in get_data_by_modality
return self.get_video(trajectory_id, key, base_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 658, in get_video
video_timestamp = timestamp[step_indices]
~~~~~~~~~^^^^^^^^^^^^^^
IndexError: index 131 is out of bounds for axis 0 with size 81
0%| | 0/2635 [00:03<?, ?it/s]
```
## Training parameters:
- **Dataset**: [Selinaliu1030/example_dataset_move_toast](https://huggingface.co/datasets/Selinaliu1030/example_dataset_move_toast)
- **Wandb run URL**: None
- **Epochs**: 5
- **Batch size**: 10
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
nnilayy/deap-dominance-multi-classification-Kfold-3
|
nnilayy
| 2025-06-19T08:26:26Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T08:26:20Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
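The card links the mixin docs but not the model code, so the sketch below only shows the generic loading pattern; `Model` is a hypothetical stand-in for the authors' unpublished `nn.Module` subclass, and its constructor arguments are invented for illustration.

```python
import torch
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical stand-in: the real architecture behind this checkpoint is not published,
# so loading will only succeed with the true class definition.
class Model(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_features: int = 32, num_classes: int = 3):
        super().__init__()
        self.head = torch.nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.head(x)

# from_pretrained restores the saved config and weights onto the class.
model = Model.from_pretrained("nnilayy/deap-dominance-multi-classification-Kfold-3")
```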
|
Srajan04/llama-3.2-3b-it-hindi-intent
|
Srajan04
| 2025-06-19T08:19:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:17:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nnilayy/deap-arousal-multi-classification-Kfold-3
|
nnilayy
| 2025-06-19T08:14:24Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T08:14:20Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
stewy33/0524_original_augmented_original_with_sdf_honeypot_ignore_comment-f00174ec
|
stewy33
| 2025-06-19T08:13:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T08:10:39Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
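Since the section is empty, here is a minimal sketch of loading this PEFT adapter on top of its base model. The base id comes from this card's metadata and may be gated; multi-GPU capacity or offloading for a 70B model is assumed.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model id from the card metadata; access and sufficient GPU memory are assumptions.
base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "stewy33/0524_original_augmented_original_with_sdf_honeypot_ignore_comment-f00174ec"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA adapter weights
```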
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
a-b-a/bert_XSS_v2_distilled_enhanced
|
a-b-a
| 2025-06-19T08:06:36Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:a-b-a/bert_XSS_v2_distilled_enhanced",
"base_model:finetune:a-b-a/bert_XSS_v2_distilled_enhanced",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T08:03:18Z |
---
library_name: transformers
license: apache-2.0
base_model: a-b-a/bert_XSS_v2_distilled_enhanced
tags:
- generated_from_trainer
model-index:
- name: bert_XSS_v2_distilled_enhanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_XSS_v2_distilled_enhanced
This model is a fine-tuned version of [a-b-a/bert_XSS_v2_distilled_enhanced](https://huggingface.co/a-b-a/bert_XSS_v2_distilled_enhanced) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
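A minimal usage sketch, assuming from the model name that this DistilBERT classifier scores strings for XSS payloads; the label names and their meanings are not documented in the card.

```python
from transformers import pipeline

# Text-classification pipeline inferred from this entry's tags; labels are undocumented.
clf = pipeline("text-classification", model="a-b-a/bert_XSS_v2_distilled_enhanced")
print(clf('<script>alert("xss")</script>'))
print(clf("hello, how are you?"))
```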
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
a-b-a/bert_XSS_v2_distilled
|
a-b-a
| 2025-06-19T08:02:41Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:a-b-a/bert_XSS_v2_distilled",
"base_model:finetune:a-b-a/bert_XSS_v2_distilled",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T07:59:36Z |
---
library_name: transformers
license: apache-2.0
base_model: a-b-a/bert_XSS_v2_distilled
tags:
- generated_from_trainer
model-index:
- name: bert_XSS_v2_distilled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_XSS_v2_distilled
This model is a fine-tuned version of [a-b-a/bert_XSS_v2_distilled](https://huggingface.co/a-b-a/bert_XSS_v2_distilled) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
fellen/ResNet-101
|
fellen
| 2025-06-19T07:52:56Z | 0 | 0 | null |
[
"AIoT",
"QNN",
"image-classification",
"license:other",
"region:us"
] |
image-classification
| 2025-06-17T03:41:35Z |
---
license: other
license_name: aplux-model-farm-license
license_link: https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf
pipeline_tag: image-classification
tags:
- AIoT
- QNN
---

## ResNet-101: Image Classification
ResNet-101 is a deep convolutional neural network in the ResNet (Residual Network) series, introduced by Kaiming He and his team in 2015. ResNet-101 consists of 101 layers and utilizes residual connections (skip connections) to address the vanishing gradient problem in deep networks, allowing it to train very deep structures without loss of accuracy. These residual connections let input features be directly passed to subsequent layers, simplifying training and enhancing model performance. ResNet-101 performs excellently in tasks such as image classification, object detection, and semantic segmentation, with its depth making it suitable for complex tasks requiring high-level feature representation. Despite its larger parameter count, its high accuracy and strong transferability have led to its widespread use in computer vision applications.
### Source model
- Input shape: 224x224
- Number of parameters: 42.49M
- Model size: 169.79 MB
- Output shape: 1x1000
Source model repository: [ResNet-101](https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py)
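For reference, a minimal sketch of the source model in torchvision, matching the 224x224 input and 1x1000 output shapes above; this loads the PyTorch source model, not the QNN deployable artifact from Model Farm.

```python
import torch
from torchvision.models import resnet101, ResNet101_Weights

# Source model only (not the deployable QNN model).
weights = ResNet101_Weights.IMAGENET1K_V2
model = resnet101(weights=weights).eval()
preprocess = weights.transforms()  # resizes/crops to 224x224 and normalizes

x = torch.rand(3, 256, 256)                 # stand-in image tensor
logits = model(preprocess(x).unsqueeze(0))  # shape matches the card's output spec
print(logits.shape)                          # torch.Size([1, 1000])
```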
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [BSD-3-CLAUSE](https://github.com/pytorch/vision/blob/main/LICENSE)
- Deployable Model: [APLUX-MODEL-FARM-LICENSE](https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf)
|
FormlessAI/f3e89571-5c36-4c83-87db-9b0fa9e022d6
|
FormlessAI
| 2025-06-19T07:52:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T04:20:52Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
model_name: f3e89571-5c36-4c83-87db-9b0fa9e022d6
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for f3e89571-5c36-4c83-87db-9b0fa9e022d6
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/f3e89571-5c36-4c83-87db-9b0fa9e022d6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/jfinztcj)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
makataomu/q-FrozenLake-v1-4x4-noSlippery
|
makataomu
| 2025-06-19T07:46:05Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T07:45:59Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # needed for gym.make; load_from_hub is the Deep RL course helper

model = load_from_hub(repo_id="makataomu/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dslighfdsl/Llama-3.1-8B-Instruct-SFT-CoT-short-full-3-alfworld-stage3
|
dslighfdsl
| 2025-06-19T07:40:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:alfworld",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T06:24:59Z |
---
datasets: alfworld
library_name: transformers
model_name: Llama-3.1-8B-Instruct-SFT-CoT-short-full-3-alfworld-stage3
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Llama-3.1-8B-Instruct-SFT-CoT-short-full-3-alfworld-stage3
This model is a fine-tuned version of an unspecified base model on the [alfworld](https://huggingface.co/datasets/alfworld) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dslighfdsl/Llama-3.1-8B-Instruct-SFT-CoT-short-full-3-alfworld-stage3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/pengliangji2023-carnegie-mellon-university/huggingface/runs/hxubvfeh)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sgonzalezygil/sd-finetuning-dreambooth-v16
|
sgonzalezygil
| 2025-06-19T07:39:57Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T07:38:21Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
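Since the section is empty, here is a minimal sketch using the `StableDiffusionPipeline` class from this entry's tags; the prompt, dtype, and CUDA device are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# Pipeline class taken from this entry's tags; fp16 and CUDA availability are assumptions.
pipe = StableDiffusionPipeline.from_pretrained(
    "sgonzalezygil/sd-finetuning-dreambooth-v16", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a garden at sunset").images[0]
image.save("sample.png")
```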
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nnilayy/dreamer-dominance-multi-classification-Kfold-2
|
nnilayy
| 2025-06-19T07:26:10Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T07:26:08Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
khanhdang/gemma_test
|
khanhdang
| 2025-06-19T07:26:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T06:58:18Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xiaoyuanliu/Qwen2.5-7B-Instruct-DeepMath10K-PPO
|
xiaoyuanliu
| 2025-06-19T07:25:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T07:18:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
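Since the section is empty, here is a minimal sketch using the tokenizer's chat template, which Qwen2.5 instruct models ship with; the math prompt reflects the DeepMath fine-tuning suggested by the repo name and is otherwise an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "xiaoyuanliu/Qwen2.5-7B-Instruct-DeepMath10K-PPO"  # repo id from this entry
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Compute the derivative of x**3 * sin(x)."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```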
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RedbeardNZ/bigvgan_v2_44khz_128band_512x
|
RedbeardNZ
| 2025-06-19T07:14:17Z | 0 | 0 |
PyTorch
|
[
"PyTorch",
"neural-vocoder",
"audio-generation",
"audio-to-audio",
"arxiv:2206.04658",
"license:mit",
"region:us"
] |
audio-to-audio
| 2025-06-19T07:14:17Z |
---
license: mit
license_link: https://huggingface.co/nvidia/BigVGAN/blob/main/LICENSE
tags:
- neural-vocoder
- audio-generation
library_name: PyTorch
pipeline_tag: audio-to-audio
---
## BigVGAN: A Universal Neural Vocoder with Large-Scale Training
#### Sang-gil Lee, Wei Ping, Boris Ginsburg, Bryan Catanzaro, Sungroh Yoon
[[Paper]](https://arxiv.org/abs/2206.04658) - [[Code]](https://github.com/NVIDIA/BigVGAN) - [[Showcase]](https://bigvgan-demo.github.io/) - [[Project Page]](https://research.nvidia.com/labs/adlr/projects/bigvgan/) - [[Weights]](https://huggingface.co/collections/nvidia/bigvgan-66959df3d97fd7d98d97dc9a) - [[Demo]](https://huggingface.co/spaces/nvidia/BigVGAN)
[](https://paperswithcode.com/sota/speech-synthesis-on-libritts?p=bigvgan-a-universal-neural-vocoder-with-large)
<center><img src="https://user-images.githubusercontent.com/15963413/218609148-881e39df-33af-4af9-ab95-1427c4ebf062.png" width="800"></center>
## News
- **Jul 2024 (v2.3):**
- General refactor and code improvements for improved readability.
- Fully fused CUDA kernel of anti-aliased activation (upsampling + activation + downsampling) with inference speed benchmark.
- **Jul 2024 (v2.2):** The repository now includes an interactive local demo using gradio.
- **Jul 2024 (v2.1):** BigVGAN is now integrated with 🤗 Hugging Face Hub with easy access to inference using pretrained checkpoints. We also provide an interactive demo on Hugging Face Spaces.
- **Jul 2024 (v2):** We release BigVGAN-v2 along with pretrained checkpoints. Below are the highlights:
- Custom CUDA kernel for inference: we provide a fused upsampling + activation kernel written in CUDA for accelerated inference speed. Our test shows 1.5 - 3x faster speed on a single A100 GPU.
- Improved discriminator and loss: BigVGAN-v2 is trained using a multi-scale sub-band CQT discriminator and a multi-scale mel spectrogram loss.
- Larger training data: BigVGAN-v2 is trained using datasets containing diverse audio types, including speech in multiple languages, environmental sounds, and instruments.
- We provide pretrained checkpoints of BigVGAN-v2 using diverse audio configurations, supporting up to 44 kHz sampling rate and 512x upsampling ratio.
## Installation
This repository contains pretrained BigVGAN checkpoints with easy access to inference and additional `huggingface_hub` support.
If you are interested in training the model and additional functionalities, please visit the official GitHub repository for more information: https://github.com/NVIDIA/BigVGAN
```shell
git lfs install
git clone https://huggingface.co/nvidia/bigvgan_v2_44khz_128band_512x
```
## Usage
The example below describes how to use BigVGAN: load the pretrained BigVGAN generator from Hugging Face Hub, compute a mel spectrogram from an input waveform, and generate a synthesized waveform using the mel spectrogram as the model's input.
```python
device = 'cuda'
import torch
import bigvgan
import librosa
from meldataset import get_mel_spectrogram
# instantiate the model. You can optionally set use_cuda_kernel=True for faster inference.
model = bigvgan.BigVGAN.from_pretrained('nvidia/bigvgan_v2_44khz_128band_512x', use_cuda_kernel=False)
# remove weight norm in the model and set to eval mode
model.remove_weight_norm()
model = model.eval().to(device)
# load wav file and compute mel spectrogram
wav_path = '/path/to/your/audio.wav'
wav, sr = librosa.load(wav_path, sr=model.h.sampling_rate, mono=True) # wav is np.ndarray with shape [T_time] and values in [-1, 1]
wav = torch.FloatTensor(wav).unsqueeze(0) # wav is FloatTensor with shape [B(1), T_time]
# compute mel spectrogram from the ground truth audio
mel = get_mel_spectrogram(wav, model.h).to(device) # mel is FloatTensor with shape [B(1), C_mel, T_frame]
# generate waveform from mel
with torch.inference_mode():
wav_gen = model(mel) # wav_gen is FloatTensor with shape [B(1), 1, T_time] and values in [-1, 1]
wav_gen_float = wav_gen.squeeze(0).cpu() # wav_gen is FloatTensor with shape [1, T_time]
# you can convert the generated waveform to 16 bit linear PCM
wav_gen_int16 = (wav_gen_float * 32767.0).numpy().astype('int16') # wav_gen is now np.ndarray with shape [1, T_time] and int16 dtype
```
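To persist the result, the 16-bit PCM array from the snippet above can be written straight to a WAV file. Using `scipy` here is an assumption not made by the card; any WAV writer works.

```python
from scipy.io import wavfile

# wav_gen_int16 has shape [1, T_time]; scipy expects [T_time, n_channels], hence the transpose.
wavfile.write("generated.wav", model.h.sampling_rate, wav_gen_int16.T)
```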
## Using Custom CUDA Kernel for Synthesis
You can apply the fast CUDA inference kernel by using a parameter `use_cuda_kernel` when instantiating BigVGAN:
```python
import bigvgan
model = bigvgan.BigVGAN.from_pretrained('nvidia/bigvgan_v2_44khz_128band_512x', use_cuda_kernel=True)
```
When applied for the first time, it builds the kernel using `nvcc` and `ninja`. If the build succeeds, the kernel is saved to `alias_free_activation/cuda/build` and the model automatically loads the kernel. The codebase has been tested using CUDA `12.1`.
Please make sure both are installed on your system and that the `nvcc` version matches the CUDA version your PyTorch build uses.
For detail, see the official GitHub repository: https://github.com/NVIDIA/BigVGAN?tab=readme-ov-file#using-custom-cuda-kernel-for-synthesis
## Pretrained Models
We provide the [pretrained models on Hugging Face Collections](https://huggingface.co/collections/nvidia/bigvgan-66959df3d97fd7d98d97dc9a).
One can download the checkpoints of the generator weight (named `bigvgan_generator.pt`) and its discriminator/optimizer states (named `bigvgan_discriminator_optimizer.pt`) within the listed model repositories.
| Model Name | Sampling Rate | Mel band | fmax | Upsampling Ratio | Params | Dataset | Steps | Fine-Tuned |
|:--------------------------------------------------------------------------------------------------------:|:-------------:|:--------:|:-----:|:----------------:|:------:|:--------------------------:|:-----:|:----------:|
| [bigvgan_v2_44khz_128band_512x](https://huggingface.co/nvidia/bigvgan_v2_44khz_128band_512x) | 44 kHz | 128 | 22050 | 512 | 122M | Large-scale Compilation | 5M | No |
| [bigvgan_v2_44khz_128band_256x](https://huggingface.co/nvidia/bigvgan_v2_44khz_128band_256x) | 44 kHz | 128 | 22050 | 256 | 112M | Large-scale Compilation | 5M | No |
| [bigvgan_v2_24khz_100band_256x](https://huggingface.co/nvidia/bigvgan_v2_24khz_100band_256x) | 24 kHz | 100 | 12000 | 256 | 112M | Large-scale Compilation | 5M | No |
| [bigvgan_v2_22khz_80band_256x](https://huggingface.co/nvidia/bigvgan_v2_22khz_80band_256x) | 22 kHz | 80 | 11025 | 256 | 112M | Large-scale Compilation | 5M | No |
| [bigvgan_v2_22khz_80band_fmax8k_256x](https://huggingface.co/nvidia/bigvgan_v2_22khz_80band_fmax8k_256x) | 22 kHz | 80 | 8000 | 256 | 112M | Large-scale Compilation | 5M | No |
| [bigvgan_24khz_100band](https://huggingface.co/nvidia/bigvgan_24khz_100band) | 24 kHz | 100 | 12000 | 256 | 112M | LibriTTS | 5M | No |
| [bigvgan_base_24khz_100band](https://huggingface.co/nvidia/bigvgan_base_24khz_100band) | 24 kHz | 100 | 12000 | 256 | 14M | LibriTTS | 5M | No |
| [bigvgan_22khz_80band](https://huggingface.co/nvidia/bigvgan_22khz_80band) | 22 kHz | 80 | 8000 | 256 | 112M | LibriTTS + VCTK + LJSpeech | 5M | No |
| [bigvgan_base_22khz_80band](https://huggingface.co/nvidia/bigvgan_base_22khz_80band) | 22 kHz | 80 | 8000 | 256 | 14M | LibriTTS + VCTK + LJSpeech | 5M | No |
|
bharathkumar1922001/10-speaker-SOTA-4800
|
bharathkumar1922001
| 2025-06-19T07:07:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:canopylabs/3b-hi-pretrain-research_release",
"base_model:adapter:canopylabs/3b-hi-pretrain-research_release",
"region:us"
] | null | 2025-06-19T06:41:57Z |
---
base_model: canopylabs/3b-hi-pretrain-research_release
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
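The card leaves usage unspecified; the snippet below is a hypothetical sketch of attaching this adapter to the listed base model with PEFT. The repo ids come from the card metadata; the dtype and everything else are illustrative assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "canopylabs/3b-hi-pretrain-research_release"   # base model from the card metadata
adapter_id = "bharathkumar1922001/10-speaker-SOTA-4800"  # this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# apply the LoRA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```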
|
thanhsc02/gemma-12b-it-lora-adapter_1000-longqa-9kself-prompt-3-full
|
thanhsc02
| 2025-06-19T07:01:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:adapter:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"region:us"
] | null | 2025-06-19T07:00:39Z |
---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
whitedevil0089devil/Roberta_Base
|
whitedevil0089devil
| 2025-06-19T07:00:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"question-answering",
"pytorch",
"en",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T10:01:14Z |
---
license: apache-2.0
base_model: roberta-base
tags:
- text-classification
- question-answering
- roberta
- pytorch
- transformers
language:
- en
pipeline_tag: text-classification
---
# Roberta_Base
This is a fine-tuned RoBERTa model for question-answering classification tasks.
## Model Details
- **Base Model**: roberta-base
- **Model Type**: Sequence Classification
- **Language**: English
- **License**: Apache 2.0
## Model Information
- **Number of Classes**: 5
- **Classification Type**: grouped_classification
- **Class Names**: Empty, Word, Short, Medium, Long
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('whitedevil0089devil/Roberta_Base')
model = AutoModelForSequenceClassification.from_pretrained('whitedevil0089devil/Roberta_Base')
# Example usage
question = "Your question here"
inputs = tokenizer(question, return_tensors="pt", truncation=True, padding=True, max_length=384)
with torch.no_grad():
    outputs = model(**inputs)

predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(outputs.logits, dim=-1).item()
confidence = predictions[0][predicted_class].item()
print(f"Predicted class: {predicted_class}")
print(f"Confidence: {confidence:.4f}")
```
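To map the numeric prediction back to the class names listed above (ordering assumed to follow the list in this card), you can continue the snippet with:
```python
class_names = ["Empty", "Word", "Short", "Medium", "Long"]  # order assumed from the card
print(f"Predicted label: {class_names[predicted_class]}")
```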
## Training Details
This model was fine-tuned using:
- **Framework**: PyTorch + Transformers
- **Optimization**: AdamW with learning rate scheduling
- **Training Strategy**: Early stopping with validation monitoring
- **Hardware**: Trained on Google Colab (T4 GPU)
## Intended Use
This model is designed for question-answering classification tasks. It can be used to:
- Classify questions into predefined categories
- Provide automated responses based on question classification
- Support Q&A systems and chatbots
## Limitations
- Model performance depends on the similarity between training data and inference data
- May not generalize well to domains significantly different from training data
- Classification accuracy may vary based on question complexity and length
## Citation
If you use this model, please cite:
```
@misc{roberta-qa-model,
title={Fine-tuned RoBERTa for Question-Answer Classification},
author={Your Name},
year={2024},
url={https://huggingface.co/whitedevil0089devil/Roberta_Base}
}
```
|
thanhsc02/gemma-12b-it-lora-adapter_1000-longqa-9kself-prompt-3
|
thanhsc02
| 2025-06-19T07:00:33Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:adapter:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"region:us"
] | null | 2025-06-19T06:31:45Z |
---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
rmdhirr/suja-lorab-ep6-suja-2000
|
rmdhirr
| 2025-06-19T06:49:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:rmdhirr/merged-suja-latest",
"base_model:adapter:rmdhirr/merged-suja-latest",
"region:us"
] | null | 2025-06-19T06:46:45Z |
---
base_model: rmdhirr/merged-suja-latest
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
vuitton/21v1scrip_34
|
vuitton
| 2025-06-19T06:31:28Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-16T15:35:14Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
JunSotohigashi/super-surf-589
|
JunSotohigashi
| 2025-06-19T06:29:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"lora",
"sft",
"dataset:JunSotohigashi/JapaneseWikipediaTypoDataset_kanji",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T02:15:48Z |
---
base_model: meta-llama/Llama-3.1-8B
datasets: JunSotohigashi/JapaneseWikipediaTypoDataset_kanji
library_name: transformers
model_name: JunSotohigashi/super-surf-589
tags:
- generated_from_trainer
- lora
- sft
licence: license
---
# Model Card for JunSotohigashi/super-surf-589
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the [JunSotohigashi/JapaneseWikipediaTypoDataset_kanji](https://huggingface.co/datasets/JunSotohigashi/JapaneseWikipediaTypoDataset_kanji) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JunSotohigashi/super-surf-589", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jun-sotohigashi-toyota-technological-institute/misusing-corpus-jp/runs/8vys7acj)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
BootesVoid/cmc2ydb5u00p4aqihhkdak7ru_cmc2ynnpx00phaqihesl8o4ak
|
BootesVoid
| 2025-06-19T06:25:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T06:25:39Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HOT
---
# Cmc2Ydb5U00P4Aqihhkdak7Ru_Cmc2Ynnpx00Phaqihesl8O4Ak
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HOT` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "HOT",
    "lora_weights": "https://huggingface.co/BootesVoid/cmc2ydb5u00p4aqihhkdak7ru_cmc2ynnpx00phaqihesl8o4ak/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc2ydb5u00p4aqihhkdak7ru_cmc2ynnpx00phaqihesl8o4ak', weight_name='lora.safetensors')
image = pipeline('HOT').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc2ydb5u00p4aqihhkdak7ru_cmc2ynnpx00phaqihesl8o4ak/discussions) to add images that show off what you’ve made with this LoRA.
|
KoichiYasuoka/modernbert-large-classical-chinese-ud-square
|
KoichiYasuoka
| 2025-06-19T06:24:35Z | 0 | 0 | null |
[
"pytorch",
"modernbert",
"classical chinese",
"literary chinese",
"ancient chinese",
"token-classification",
"pos",
"dependency-parsing",
"lzh",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/modernbert-large-classical-chinese",
"base_model:finetune:KoichiYasuoka/modernbert-large-classical-chinese",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2025-06-19T06:22:37Z |
---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/modernbert-large-classical-chinese
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "孟子見梁惠王"
---
# modernbert-large-classical-chinese-ud-square
## Model Description
This is a ModernBERT model pretrained on Classical Chinese texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [modernbert-large-classical-chinese](https://huggingface.co/KoichiYasuoka/modernbert-large-classical-chinese) and [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto).
## How to Use
```py
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/modernbert-large-classical-chinese-ud-square",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("孟子見梁惠王"))
```
|
EYEDOL/MISTRAL7B_ON_ALPACA4
|
EYEDOL
| 2025-06-19T06:06:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.1-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.1-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T06:06:05Z |
---
base_model: unsloth/mistral-7b-instruct-v0.1-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** EYEDOL
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.1-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
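A minimal inference sketch, assuming the repository contains full fine-tuned weights loadable through Unsloth; the Alpaca-style prompt is an assumption based on the model name, and the generation settings are illustrative:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="EYEDOL/MISTRAL7B_ON_ALPACA4",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

prompt = "### Instruction:\nExplain photosynthesis in one paragraph.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```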
|
apriasmoro/fc08d115-d555-4674-864a-0dd0ff54f304
|
apriasmoro
| 2025-06-19T06:02:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b",
"base_model:adapter:unsloth/codegemma-7b",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T05:53:22Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fc08d115-d555-4674-864a-0dd0ff54f304
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b
bf16: true
chat_template: llama3
datasets:
- data_files:
  - 5313f4d1e8057633_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/
  type:
    field_instruction: instruct
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: apriasmoro/fc08d115-d555-4674-864a-0dd0ff54f304
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 4
mlflow_experiment_name: /tmp/5313f4d1e8057633_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 25
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: efaf2747-93ba-4914-bbfb-4587efac813b
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: efaf2747-93ba-4914-bbfb-4587efac813b
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# fc08d115-d555-4674-864a-0dd0ff54f304
This model is a fine-tuned version of [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0122 | 1 | 1.0012 |
| 2.6887 | 0.5732 | 47 | 0.9802 |
| 0.638 | 1.1463 | 94 | 0.8698 |
| 1.3412 | 1.7195 | 141 | 0.8378 |
| 0.6638 | 2.2927 | 188 | 0.9116 |
| 0.3695 | 2.8659 | 235 | 0.8509 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_20_2_song_3_49
|
winnieyangwannan
| 2025-06-19T05:59:09Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-02T17:06:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
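As a hypothetical starting point (the card itself is unfilled): the repo tags indicate a Llama-3.1-architecture text-generation model, so standard `transformers` loading should apply. The repo id is taken from this card; the dtype, device map, and generation settings are illustrative.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_20_2_song_3_49"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Give me a one-line fun fact."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```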
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bharathkumar1922001/10-speaker-SOTA-3600
|
bharathkumar1922001
| 2025-06-19T05:53:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:canopylabs/3b-hi-pretrain-research_release",
"base_model:adapter:canopylabs/3b-hi-pretrain-research_release",
"region:us"
] | null | 2025-06-19T05:52:30Z |
---
base_model: canopylabs/3b-hi-pretrain-research_release
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_8_2_song_3_49
|
winnieyangwannan
| 2025-06-19T05:53:06Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-02T17:13:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_6_2_song_3_49
|
winnieyangwannan
| 2025-06-19T05:51:10Z | 82 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-02T17:05:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bharathsj/bio-medical-mixed-8k
|
bharathsj
| 2025-06-19T05:50:26Z | 0 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T05:43:13Z |
---
license: apache-2.0
---
|
gsdfg18919/tyrel
|
gsdfg18919
| 2025-06-19T05:49:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T05:49:34Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/all-black-background-mukiwp7v3e6j3fd4.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: tyrel
---
# tyrel
<Gallery />
## Trigger words
You should use `tyrel` to trigger the image generation.
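As a hedged sketch (assuming the LoRA loads directly onto the `black-forest-labs/FLUX.1-dev` base listed in the metadata), generation with 🤗 diffusers might look like:
```
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")  # assumes a CUDA device with enough VRAM
pipe.load_lora_weights("gsdfg18919/tyrel")

# `tyrel` is the trigger word documented above.
image = pipe("tyrel, portrait photo", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("tyrel.png")
```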
## Download model
Weights for this model are available in Safetensors format.
[Download](/gsdfg18919/tyrel/tree/main) them in the Files & versions tab.
|
johngreendr1/c5d305b9-d963-4ec3-af93-6eb3a9227e3a
|
johngreendr1
| 2025-06-19T05:45:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"region:us"
] | null | 2025-06-19T03:53:56Z |
---
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
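Given the metadata above (a PEFT adapter on `NousResearch/Hermes-2-Theta-Llama-3-8B`), a minimal loading sketch — an assumption, since the card itself is unfilled:
```
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Hermes-2-Theta-Llama-3-8B"
adapter_id = "johngreendr1/c5d305b9-d963-4ec3-af93-6eb3a9227e3a"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```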
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_28_2_song_3_49
|
winnieyangwannan
| 2025-06-19T05:44:43Z | 156 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-02T17:06:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
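Pending the authors' own snippet, a minimal sketch assuming the checkpoint loads as a standard Llama-3.1-style text-generation model:
```
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_28_2_song_3_49",
    torch_dtype="auto",
    device_map="auto",
)

out = pipe("Question: Who performed this song?\nAnswer:", max_new_tokens=64, do_sample=False)
print(out[0]["generated_text"])
```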
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yujiepan/bert-base-uncased-sst2-int8-unstructured80-30epoch
|
yujiepan
| 2025-06-19T05:43:52Z | 58 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"openvino",
"bert",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2023-02-09T17:09:40Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: yujiepan/bert-base-uncased-sst2-int8-unstructured80-30epoch
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9139908256880734
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Joint magnitude pruning, quantization and distillation on BERT-base/SST-2
This model was produced by applying unstructured magnitude pruning, quantization, and distillation jointly while finetuning on the GLUE SST-2 dataset.
It achieves the following results on the evaluation set:
- Torch loss: 0.4116
- Torch accuracy: 0.9140
- OpenVINO IR accuracy: 0.9106
- Sparsity in transformer block linear layers: 0.80
## Setup
```
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
git clone https://github.com/yujiepan-work/optimum-intel.git
cd optimum-intel
git checkout -b "magnitude-pruning" 01927af543eaea8678671bf8f4eb78fdb29f8930
pip install -e .[openvino,nncf]
cd examples/openvino/text-classification/
pip install -r requirements.txt
pip install wandb # optional
```
## NNCF config
See `nncf_config.json` in this repo.
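For orientation only — the authoritative file ships with this repo — a joint sparsity-plus-quantization NNCF config typically combines two compression entries along these lines (illustrative shapes and values, not the exact config used here):
```
{
  "input_info": [
    { "sample_size": [32, 128], "type": "long", "keyword": "input_ids" },
    { "sample_size": [32, 128], "type": "long", "keyword": "attention_mask" }
  ],
  "compression": [
    {
      "algorithm": "magnitude_sparsity",
      "params": { "schedule": "polynomial", "sparsity_target": 0.8 }
    },
    { "algorithm": "quantization" }
  ]
}
```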
## Run
We use a single GPU for training.
```
NNCFCFG=/path/to/nncf/config
python run_glue.py \
--lr_scheduler_type cosine_with_restarts \
--cosine_cycle_ratios 8,6,4,4,4,4 \
--cosine_cycle_decays 1,1,1,1,1,1 \
--save_best_model_after_epoch -1 \
--save_best_model_after_sparsity 0.7999 \
--model_name_or_path textattack/bert-base-uncased-SST-2 \
--teacher_model_or_path yoshitomo-matsubara/bert-large-uncased-sst2 \
--distillation_temperature 2 \
--task_name sst2 \
--nncf_compression_config $NNCFCFG \
--distillation_weight 0.95 \
--output_dir /tmp/bert-base-uncased-sst2-int8-unstructured80-30epoch \
--run_name bert-base-uncased-sst2-int8-unstructured80-30epoch \
--overwrite_output_dir \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--learning_rate 5e-05 \
--optim adamw_torch \
--num_train_epochs 30 \
--logging_steps 1 \
--evaluation_strategy steps \
--eval_steps 250 \
--save_strategy steps \
--save_steps 250 \
--save_total_limit 1 \
--fp16 \
--seed 1
```
The best model checkpoint is stored in the `best_model` folder. Here we only upload that checkpoint folder together with some config files.
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
For a full description of the environment, please refer to `pip-requirements.txt` and `conda-requirements.txt`.
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_26_2_song_3_49
|
winnieyangwannan
| 2025-06-19T05:42:59Z | 42 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-02T17:05:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
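In the absence of an official snippet, a minimal sketch assuming the checkpoint behaves like a standard Llama-3.1-Instruct derivative:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition-same_last_layer_26_2_song_3_49"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

# Llama-3.1-Instruct derivatives ship a chat template; apply it for prompting.
prompt_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Name a famous song and its artist."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

out = model.generate(prompt_ids, max_new_tokens=64)
print(tokenizer.decode(out[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
```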
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jusjinuk/Llama-2-13b-hf-4bit-GuidedQuant-QTIP
|
jusjinuk
| 2025-06-19T05:41:03Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2505.07004",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:quantized:meta-llama/Llama-2-13b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T05:05:57Z |
---
base_model:
- meta-llama/Llama-2-13b-hf
base_model_relation: quantized
license: llama2
---
# Model Card
- Base model: `meta-llama/Llama-2-13b-hf`
- Quantization method: BlockLDLQ with GuidedQuant Hessian
- Target bit-width: 4
- Backend kernel: QTIP kernel (HYB variant)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 4
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant and https://github.com/Cornell-RelaxML/qtip
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
|
Bunpot/qwen3-14b-instruct
|
Bunpot
| 2025-06-19T05:36:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T05:35:41Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Bunpot
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
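A loading sketch under stated assumptions (this presumes the repo holds a full Unsloth-compatible checkpoint rather than adapter-only weights):
```
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Bunpot/qwen3-14b-instruct",
    max_seq_length=2048,
    load_in_4bit=True,  # assumption: 4-bit loading suits this checkpoint
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's fast inference mode
```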
|
jusjinuk/Llama-2-70b-hf-2bit-SqueezeLLM
|
jusjinuk
| 2025-06-19T05:35:17Z | 60 | 0 | null |
[
"pytorch",
"llama",
"arxiv:2505.07004",
"base_model:meta-llama/Llama-2-70b-hf",
"base_model:quantized:meta-llama/Llama-2-70b-hf",
"license:llama2",
"region:us"
] | null | 2025-05-20T15:51:36Z |
---
base_model:
- meta-llama/Llama-2-70b-hf
base_model_relation: quantized
license: llama2
---
# Model Card
- Base model: `meta-llama/Llama-2-70b-hf`
- Quantization method: SqueezeLLM
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
|
jusjinuk/Llama-2-70b-hf-4bit-LNQ
|
jusjinuk
| 2025-06-19T05:32:08Z | 29 | 0 | null |
[
"pytorch",
"llama",
"arxiv:2505.07004",
"base_model:meta-llama/Llama-2-70b-hf",
"base_model:quantized:meta-llama/Llama-2-70b-hf",
"license:llama2",
"region:us"
] | null | 2025-05-20T11:37:48Z |
---
base_model:
- meta-llama/Llama-2-70b-hf
base_model_relation: quantized
license: llama2
---
# Model Card
- Base model: `meta-llama/Llama-2-70b-hf`
- Quantization method: LNQ
- Target bit-width: 4
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
# How to run
- Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
# References
- [Model Paper](https://arxiv.org/abs/2505.07004)
|