| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0–18.3M) | metadata (stringlengths, 2–1.07B) | id (stringlengths, 5–122) | last_modified (null) | tags (listlengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25) |
---|---|---|---|---|---|---|---|---|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
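Pending details from the authors, here is a minimal, hedged loading sketch. It assumes the repository (`Aaron82352/length_generalization_testing`, per the card metadata) holds a standard 🤗 Transformers checkpoint; since no pipeline tag is set, the generic `AutoModel` class is used and may need to be swapped for a task-specific one.

```python
from transformers import AutoModel, AutoTokenizer

repo_id = "Aaron82352/length_generalization_testing"  # repo id from the card metadata

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)  # swap for a task-specific Auto class if known
```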
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Aaron82352/length_generalization_testing | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:10:41+00:00 |
text-generation | transformers | {} | ASaska/tamasi-1000-ft | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:11:45+00:00 |
|
text2text-generation | transformers | {} | Megatron17/results | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:12:37+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6032
- F1 Score: 0.7332
- Accuracy: 0.7333
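Since this is a PEFT adapter on top of the `seqsight` base checkpoint, here is a hedged loading sketch. The exact task head is not stated; `AutoModelForSequenceClassification` with two labels is an assumption based on the binary GUE task.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_0-seqsight_8192_512_30M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels=2 is an assumption; the base may also require trust_remote_code=True.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter weights
```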
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
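As a reference, the listed values map onto 🤗 `TrainingArguments` roughly as follows; this is a reconstruction for illustration, not the authors' actual training script:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="GUE_mouse_0-seqsight_8192_512_30M-L8_f",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    adam_beta1=0.9,   # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```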
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6306 | 3.92 | 200 | 0.5759 | 0.6920 | 0.6926 |
| 0.5587 | 7.84 | 400 | 0.5459 | 0.7346 | 0.7346 |
| 0.524 | 11.76 | 600 | 0.5219 | 0.7490 | 0.7494 |
| 0.4993 | 15.69 | 800 | 0.5213 | 0.7433 | 0.7469 |
| 0.4824 | 19.61 | 1000 | 0.5078 | 0.7651 | 0.7654 |
| 0.4644 | 23.53 | 1200 | 0.5297 | 0.7406 | 0.7444 |
| 0.4475 | 27.45 | 1400 | 0.5143 | 0.7650 | 0.7654 |
| 0.4334 | 31.37 | 1600 | 0.5257 | 0.7593 | 0.7593 |
| 0.4156 | 35.29 | 1800 | 0.5306 | 0.7616 | 0.7617 |
| 0.4 | 39.22 | 2000 | 0.5502 | 0.7629 | 0.7630 |
| 0.3899 | 43.14 | 2200 | 0.5512 | 0.7752 | 0.7753 |
| 0.3778 | 47.06 | 2400 | 0.5614 | 0.7612 | 0.7617 |
| 0.3596 | 50.98 | 2600 | 0.6174 | 0.7587 | 0.7593 |
| 0.3538 | 54.9 | 2800 | 0.5910 | 0.7521 | 0.7531 |
| 0.3416 | 58.82 | 3000 | 0.6229 | 0.7593 | 0.7593 |
| 0.3294 | 62.75 | 3200 | 0.6087 | 0.7652 | 0.7654 |
| 0.3217 | 66.67 | 3400 | 0.6179 | 0.7664 | 0.7667 |
| 0.3095 | 70.59 | 3600 | 0.6788 | 0.7593 | 0.7593 |
| 0.2974 | 74.51 | 3800 | 0.6854 | 0.7510 | 0.7519 |
| 0.286 | 78.43 | 4000 | 0.6915 | 0.7564 | 0.7568 |
| 0.279 | 82.35 | 4200 | 0.7428 | 0.7630 | 0.7630 |
| 0.2706 | 86.27 | 4400 | 0.7287 | 0.7665 | 0.7667 |
| 0.2634 | 90.2 | 4600 | 0.7211 | 0.7528 | 0.7531 |
| 0.2573 | 94.12 | 4800 | 0.7345 | 0.7628 | 0.7630 |
| 0.2504 | 98.04 | 5000 | 0.7398 | 0.7599 | 0.7605 |
| 0.2383 | 101.96 | 5200 | 0.7890 | 0.7544 | 0.7543 |
| 0.2385 | 105.88 | 5400 | 0.7732 | 0.7482 | 0.7481 |
| 0.2276 | 109.8 | 5600 | 0.8023 | 0.7556 | 0.7556 |
| 0.2271 | 113.73 | 5800 | 0.7904 | 0.7587 | 0.7593 |
| 0.2251 | 117.65 | 6000 | 0.8021 | 0.7555 | 0.7556 |
| 0.2163 | 121.57 | 6200 | 0.8689 | 0.7469 | 0.7469 |
| 0.2135 | 125.49 | 6400 | 0.8869 | 0.7432 | 0.7432 |
| 0.2045 | 129.41 | 6600 | 0.9004 | 0.7445 | 0.7444 |
| 0.2038 | 133.33 | 6800 | 0.8614 | 0.7456 | 0.7457 |
| 0.2045 | 137.25 | 7000 | 0.8644 | 0.7568 | 0.7568 |
| 0.1986 | 141.18 | 7200 | 0.8741 | 0.7568 | 0.7568 |
| 0.1924 | 145.1 | 7400 | 0.8985 | 0.7455 | 0.7457 |
| 0.1941 | 149.02 | 7600 | 0.9052 | 0.7482 | 0.7481 |
| 0.1938 | 152.94 | 7800 | 0.8921 | 0.7467 | 0.7469 |
| 0.1896 | 156.86 | 8000 | 0.9117 | 0.7430 | 0.7432 |
| 0.1822 | 160.78 | 8200 | 0.9299 | 0.7432 | 0.7432 |
| 0.1812 | 164.71 | 8400 | 0.9327 | 0.7531 | 0.7531 |
| 0.1882 | 168.63 | 8600 | 0.9083 | 0.7420 | 0.7420 |
| 0.1805 | 172.55 | 8800 | 0.9239 | 0.7482 | 0.7481 |
| 0.1764 | 176.47 | 9000 | 0.9368 | 0.7494 | 0.7494 |
| 0.1778 | 180.39 | 9200 | 0.9469 | 0.7519 | 0.7519 |
| 0.173 | 184.31 | 9400 | 0.9455 | 0.7457 | 0.7457 |
| 0.174 | 188.24 | 9600 | 0.9456 | 0.7470 | 0.7469 |
| 0.1723 | 192.16 | 9800 | 0.9487 | 0.7482 | 0.7481 |
| 0.1772 | 196.08 | 10000 | 0.9479 | 0.7469 | 0.7469 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_0-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:12:44+00:00 |
null | transformers | {} | luonluonvn/small100_ct2_quant_int8 | null | [
"transformers",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:12:44+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5799
- F1 Score: 0.7317
- Accuracy: 0.7321
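The F1 score and accuracy above can be recomputed from saved predictions with scikit-learn; a sketch, assuming macro-averaged F1 (an assumption, consistent with F1 tracking accuracy so closely on this binary task):

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder arrays; substitute the real GUE_mouse_0 eval labels and predictions.
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

print("accuracy:", accuracy_score(y_true, y_pred))
print("F1 (macro):", f1_score(y_true, y_pred, average="macro"))
```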
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6499 | 3.92 | 200 | 0.6000 | 0.6521 | 0.6556 |
| 0.5979 | 7.84 | 400 | 0.5797 | 0.6883 | 0.6889 |
| 0.5761 | 11.76 | 600 | 0.5590 | 0.7086 | 0.7086 |
| 0.5595 | 15.69 | 800 | 0.5484 | 0.7281 | 0.7284 |
| 0.5452 | 19.61 | 1000 | 0.5436 | 0.7247 | 0.7247 |
| 0.5349 | 23.53 | 1200 | 0.5486 | 0.7225 | 0.7284 |
| 0.5213 | 27.45 | 1400 | 0.5208 | 0.7467 | 0.7469 |
| 0.5139 | 31.37 | 1600 | 0.5157 | 0.7528 | 0.7531 |
| 0.5016 | 35.29 | 1800 | 0.5107 | 0.7578 | 0.7580 |
| 0.4946 | 39.22 | 2000 | 0.5147 | 0.7518 | 0.7519 |
| 0.4891 | 43.14 | 2200 | 0.5051 | 0.7629 | 0.7630 |
| 0.4845 | 47.06 | 2400 | 0.5063 | 0.7593 | 0.7593 |
| 0.4786 | 50.98 | 2600 | 0.5183 | 0.7564 | 0.7568 |
| 0.4707 | 54.9 | 2800 | 0.5015 | 0.7582 | 0.7593 |
| 0.4689 | 58.82 | 3000 | 0.5044 | 0.7640 | 0.7642 |
| 0.4638 | 62.75 | 3200 | 0.4977 | 0.7660 | 0.7667 |
| 0.4597 | 66.67 | 3400 | 0.5005 | 0.7640 | 0.7642 |
| 0.46 | 70.59 | 3600 | 0.5013 | 0.7629 | 0.7630 |
| 0.4543 | 74.51 | 3800 | 0.5016 | 0.7613 | 0.7617 |
| 0.4488 | 78.43 | 4000 | 0.5016 | 0.7595 | 0.7605 |
| 0.4468 | 82.35 | 4200 | 0.5019 | 0.7611 | 0.7617 |
| 0.4416 | 86.27 | 4400 | 0.5146 | 0.7655 | 0.7654 |
| 0.4443 | 90.2 | 4600 | 0.5032 | 0.7619 | 0.7630 |
| 0.4386 | 94.12 | 4800 | 0.5068 | 0.7616 | 0.7617 |
| 0.4377 | 98.04 | 5000 | 0.5030 | 0.7658 | 0.7667 |
| 0.4332 | 101.96 | 5200 | 0.5148 | 0.7667 | 0.7667 |
| 0.429 | 105.88 | 5400 | 0.5096 | 0.7603 | 0.7605 |
| 0.43 | 109.8 | 5600 | 0.5135 | 0.7618 | 0.7617 |
| 0.4269 | 113.73 | 5800 | 0.5132 | 0.7639 | 0.7642 |
| 0.4278 | 117.65 | 6000 | 0.5193 | 0.7581 | 0.7580 |
| 0.4235 | 121.57 | 6200 | 0.5165 | 0.7677 | 0.7679 |
| 0.4246 | 125.49 | 6400 | 0.5134 | 0.7676 | 0.7679 |
| 0.4193 | 129.41 | 6600 | 0.5175 | 0.7605 | 0.7605 |
| 0.4188 | 133.33 | 6800 | 0.5150 | 0.7665 | 0.7667 |
| 0.4207 | 137.25 | 7000 | 0.5140 | 0.7700 | 0.7704 |
| 0.417 | 141.18 | 7200 | 0.5174 | 0.7713 | 0.7716 |
| 0.4105 | 145.1 | 7400 | 0.5207 | 0.7664 | 0.7667 |
| 0.4136 | 149.02 | 7600 | 0.5199 | 0.7653 | 0.7654 |
| 0.416 | 152.94 | 7800 | 0.5139 | 0.7724 | 0.7728 |
| 0.4132 | 156.86 | 8000 | 0.5164 | 0.7686 | 0.7691 |
| 0.4086 | 160.78 | 8200 | 0.5218 | 0.7701 | 0.7704 |
| 0.4089 | 164.71 | 8400 | 0.5229 | 0.7677 | 0.7679 |
| 0.4116 | 168.63 | 8600 | 0.5170 | 0.7688 | 0.7691 |
| 0.4085 | 172.55 | 8800 | 0.5201 | 0.7724 | 0.7728 |
| 0.4071 | 176.47 | 9000 | 0.5198 | 0.7713 | 0.7716 |
| 0.4071 | 180.39 | 9200 | 0.5193 | 0.7712 | 0.7716 |
| 0.4024 | 184.31 | 9400 | 0.5221 | 0.7726 | 0.7728 |
| 0.4033 | 188.24 | 9600 | 0.5230 | 0.7726 | 0.7728 |
| 0.4081 | 192.16 | 9800 | 0.5206 | 0.7726 | 0.7728 |
| 0.4032 | 196.08 | 10000 | 0.5208 | 0.7738 | 0.7741 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_0-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:12:44+00:00 |
text-generation | transformers |
# Microllama-300.500kmerge
Microllama-300.500kmerge is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Corianas/Microllama_Char_500k_step](https://huggingface.co/Corianas/Microllama_Char_500k_step)
* [Corianas/Microllama_Char_300k_step](https://huggingface.co/Corianas/Microllama_Char_300k_step)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Corianas/Microllama_Char_500k_step
layer_range: [0, 12]
- model: Corianas/Microllama_Char_300k_step
layer_range: [0, 12]
merge_method: slerp
base_model: Corianas/Microllama_Char_300k_step
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
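For intuition, slerp (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, which preserves their scale better than plain averaging. A minimal NumPy sketch of the idea (illustrative only, not mergekit's implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between flattened weight vectors a and b."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between them
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```

In the config above, `t` is scheduled per layer and per parameter group, so `self_attn` and `mlp` weights are blended with different ratios across the 12 layers.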
## 💻 Usage
```python
# Shell prerequisite (run in your environment first): pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Corianas/Microllama-300.500kmerge"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "Corianas/Microllama_Char_500k_step", "Corianas/Microllama_Char_300k_step"], "base_model": ["Corianas/Microllama_Char_500k_step", "Corianas/Microllama_Char_300k_step"]} | Corianas/Microllama-300.500kmerge | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Corianas/Microllama_Char_500k_step",
"Corianas/Microllama_Char_300k_step",
"base_model:Corianas/Microllama_Char_500k_step",
"base_model:Corianas/Microllama_Char_300k_step",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:13:04+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-llama-adapterhappy2sad-study-50-0.006 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:13:04+00:00 |
null | null | {"license": "cc"} | KlaskyCsupoRoboSplaat/AbbyHatcherAI | null | [
"license:cc",
"region:us"
]
| null | 2024-04-27T05:14:29+00:00 |
|
text-generation | transformers | {} | Smd-Arshad/llama-senseai | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:14:29+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs256_nodpo_only4w_zephyr_iter_2
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1](https://huggingface.co/ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the total batch sizes are derived just after the list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
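The listed total batch sizes follow directly from the per-device settings: 8 (per-device train batch) × 8 (GPUs) × 4 (gradient accumulation steps) = 256 for training, and 8 × 8 = 64 for evaluation, which uses no gradient accumulation.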
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_zephyr_iter_2", "results": []}]} | ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:17:58+00:00 |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | himanshubeniwal/mt5-base-finetuned-kk-to-en-filthy-Indian | null | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:18:16+00:00 |
null | null | {"license": "openrail"} | MinLeo/INTAK-AllRounder | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-27T05:18:17+00:00 |
|
text-generation | transformers | # output-model-directory
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
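Passthrough copies layer slices from the source checkpoints verbatim and stacks them; no weights are averaged. A rough, hypothetical sketch of the idea (not mergekit's code; `model.model.layers` assumes a standard decoder layout):

```python
from transformers import AutoModelForCausalLM

# Hypothetical identifiers; the actual sources are local workspace checkpoints.
model_a = AutoModelForCausalLM.from_pretrained("source-model-a")
model_b = AutoModelForCausalLM.from_pretrained("source-model-b")

# Stack layer slices verbatim, matching the layer_range entries in the config below.
merged_layers = list(model_a.model.layers[0:15]) + list(model_b.model.layers[16:17])
```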
### Models Merged
The following models were included in the merge:
* /workspace/sigrid-llm-lab/layer_locked_raw_sk
* /workspace/sigrid-llm-lab/sigrid-llm-lab/sigrid-llm-lab/layer_locked_inst
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: /workspace/sigrid-llm-lab/layer_locked_raw_sk
layer_range: [0, 15]
- sources:
- model: /workspace/sigrid-llm-lab/sigrid-llm-lab/sigrid-llm-lab/layer_locked_inst
layer_range: [16, 17]
merge_method: passthrough
dtype: float16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": []} | sigridjineth/gemma-2b-var | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:19:12+00:00 |
null | null | {} | ddddd3424/zh_HK | null | [
"region:us"
]
| null | 2024-04-27T05:22:52+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2479
- F1 Score: 0.8899
- Accuracy: 0.8900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5022 | 0.47 | 200 | 0.3628 | 0.8360 | 0.8360 |
| 0.3809 | 0.95 | 400 | 0.3171 | 0.8588 | 0.8589 |
| 0.3403 | 1.42 | 600 | 0.2940 | 0.8722 | 0.8723 |
| 0.3318 | 1.9 | 800 | 0.2831 | 0.8742 | 0.8744 |
| 0.3115 | 2.37 | 1000 | 0.2759 | 0.8757 | 0.8758 |
| 0.3039 | 2.84 | 1200 | 0.2728 | 0.8785 | 0.8787 |
| 0.2895 | 3.32 | 1400 | 0.2651 | 0.8808 | 0.8809 |
| 0.2934 | 3.79 | 1600 | 0.2643 | 0.8829 | 0.8829 |
| 0.2865 | 4.27 | 1800 | 0.2663 | 0.8832 | 0.8835 |
| 0.2807 | 4.74 | 2000 | 0.2628 | 0.8841 | 0.8841 |
| 0.282 | 5.21 | 2200 | 0.2592 | 0.8859 | 0.8861 |
| 0.2762 | 5.69 | 2400 | 0.2551 | 0.8873 | 0.8873 |
| 0.2743 | 6.16 | 2600 | 0.2550 | 0.8881 | 0.8882 |
| 0.2698 | 6.64 | 2800 | 0.2528 | 0.8894 | 0.8894 |
| 0.2758 | 7.11 | 3000 | 0.2541 | 0.8888 | 0.8888 |
| 0.2661 | 7.58 | 3200 | 0.2570 | 0.8879 | 0.8879 |
| 0.2729 | 8.06 | 3400 | 0.2482 | 0.8884 | 0.8885 |
| 0.2621 | 8.53 | 3600 | 0.2524 | 0.8897 | 0.8897 |
| 0.2682 | 9.0 | 3800 | 0.2485 | 0.8909 | 0.8909 |
| 0.2611 | 9.48 | 4000 | 0.2493 | 0.8910 | 0.8912 |
| 0.2657 | 9.95 | 4200 | 0.2482 | 0.8919 | 0.8919 |
| 0.259 | 10.43 | 4400 | 0.2476 | 0.8903 | 0.8903 |
| 0.2589 | 10.9 | 4600 | 0.2496 | 0.8924 | 0.8924 |
| 0.254 | 11.37 | 4800 | 0.2481 | 0.8895 | 0.8895 |
| 0.263 | 11.85 | 5000 | 0.2457 | 0.8916 | 0.8916 |
| 0.2601 | 12.32 | 5200 | 0.2521 | 0.8880 | 0.8881 |
| 0.2584 | 12.8 | 5400 | 0.2491 | 0.8909 | 0.8909 |
| 0.2591 | 13.27 | 5600 | 0.2435 | 0.8895 | 0.8895 |
| 0.252 | 13.74 | 5800 | 0.2433 | 0.8917 | 0.8918 |
| 0.256 | 14.22 | 6000 | 0.2443 | 0.8907 | 0.8907 |
| 0.2522 | 14.69 | 6200 | 0.2450 | 0.8923 | 0.8924 |
| 0.2555 | 15.17 | 6400 | 0.2464 | 0.8885 | 0.8885 |
| 0.2557 | 15.64 | 6600 | 0.2427 | 0.8907 | 0.8907 |
| 0.2506 | 16.11 | 6800 | 0.2408 | 0.8923 | 0.8924 |
| 0.2497 | 16.59 | 7000 | 0.2427 | 0.8922 | 0.8922 |
| 0.2558 | 17.06 | 7200 | 0.2423 | 0.8921 | 0.8921 |
| 0.2495 | 17.54 | 7400 | 0.2455 | 0.8906 | 0.8906 |
| 0.2528 | 18.01 | 7600 | 0.2410 | 0.8919 | 0.8919 |
| 0.25 | 18.48 | 7800 | 0.2424 | 0.8921 | 0.8921 |
| 0.2518 | 18.96 | 8000 | 0.2404 | 0.8929 | 0.8930 |
| 0.2499 | 19.43 | 8200 | 0.2430 | 0.8919 | 0.8919 |
| 0.2512 | 19.91 | 8400 | 0.2399 | 0.8916 | 0.8916 |
| 0.2519 | 20.38 | 8600 | 0.2407 | 0.8924 | 0.8924 |
| 0.2464 | 20.85 | 8800 | 0.2395 | 0.8938 | 0.8938 |
| 0.2462 | 21.33 | 9000 | 0.2405 | 0.8931 | 0.8931 |
| 0.2465 | 21.8 | 9200 | 0.2414 | 0.8934 | 0.8934 |
| 0.2502 | 22.27 | 9400 | 0.2405 | 0.8930 | 0.8930 |
| 0.2446 | 22.75 | 9600 | 0.2399 | 0.8931 | 0.8931 |
| 0.2504 | 23.22 | 9800 | 0.2400 | 0.8926 | 0.8927 |
| 0.2509 | 23.7 | 10000 | 0.2402 | 0.8934 | 0.8934 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_1-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:23:02+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0391
- F1 Score: 0.7235
- Accuracy: 0.7235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.613 | 3.92 | 200 | 0.5489 | 0.7276 | 0.7284 |
| 0.5242 | 7.84 | 400 | 0.5311 | 0.7370 | 0.7370 |
| 0.4811 | 11.76 | 600 | 0.5174 | 0.7455 | 0.7457 |
| 0.4366 | 15.69 | 800 | 0.5271 | 0.7463 | 0.7494 |
| 0.3983 | 19.61 | 1000 | 0.5791 | 0.7438 | 0.7444 |
| 0.3539 | 23.53 | 1200 | 0.6226 | 0.7650 | 0.7654 |
| 0.3137 | 27.45 | 1400 | 0.6891 | 0.7506 | 0.7506 |
| 0.2833 | 31.37 | 1600 | 0.7565 | 0.7388 | 0.7395 |
| 0.2452 | 35.29 | 1800 | 0.7811 | 0.7330 | 0.7333 |
| 0.2181 | 39.22 | 2000 | 0.9093 | 0.7487 | 0.7494 |
| 0.1983 | 43.14 | 2200 | 0.9329 | 0.7527 | 0.7531 |
| 0.1789 | 47.06 | 2400 | 0.9086 | 0.7543 | 0.7543 |
| 0.1606 | 50.98 | 2600 | 0.9805 | 0.7654 | 0.7654 |
| 0.1529 | 54.9 | 2800 | 0.9168 | 0.7615 | 0.7617 |
| 0.1377 | 58.82 | 3000 | 1.0383 | 0.7419 | 0.7420 |
| 0.1267 | 62.75 | 3200 | 1.0284 | 0.7506 | 0.7506 |
| 0.1125 | 66.67 | 3400 | 1.1102 | 0.7479 | 0.7481 |
| 0.104 | 70.59 | 3600 | 1.2252 | 0.7442 | 0.7444 |
| 0.0937 | 74.51 | 3800 | 1.1755 | 0.7531 | 0.7531 |
| 0.094 | 78.43 | 4000 | 1.2074 | 0.7432 | 0.7432 |
| 0.0907 | 82.35 | 4200 | 1.2251 | 0.7420 | 0.7420 |
| 0.079 | 86.27 | 4400 | 1.2857 | 0.7505 | 0.7506 |
| 0.0765 | 90.2 | 4600 | 1.2619 | 0.7531 | 0.7531 |
| 0.0733 | 94.12 | 4800 | 1.2980 | 0.7593 | 0.7593 |
| 0.0688 | 98.04 | 5000 | 1.3034 | 0.7642 | 0.7642 |
| 0.0658 | 101.96 | 5200 | 1.2959 | 0.7567 | 0.7568 |
| 0.0614 | 105.88 | 5400 | 1.3782 | 0.7502 | 0.7506 |
| 0.0607 | 109.8 | 5600 | 1.3433 | 0.7481 | 0.7481 |
| 0.0589 | 113.73 | 5800 | 1.3985 | 0.7555 | 0.7556 |
| 0.0547 | 117.65 | 6000 | 1.3775 | 0.7567 | 0.7568 |
| 0.0517 | 121.57 | 6200 | 1.4986 | 0.7481 | 0.7481 |
| 0.0518 | 125.49 | 6400 | 1.5264 | 0.7491 | 0.7494 |
| 0.0487 | 129.41 | 6600 | 1.4869 | 0.7493 | 0.7494 |
| 0.0467 | 133.33 | 6800 | 1.4509 | 0.7519 | 0.7519 |
| 0.0477 | 137.25 | 7000 | 1.4770 | 0.7494 | 0.7494 |
| 0.0465 | 141.18 | 7200 | 1.4356 | 0.7543 | 0.7543 |
| 0.0409 | 145.1 | 7400 | 1.5309 | 0.7493 | 0.7494 |
| 0.0415 | 149.02 | 7600 | 1.5781 | 0.7542 | 0.7543 |
| 0.0373 | 152.94 | 7800 | 1.6046 | 0.7531 | 0.7531 |
| 0.0396 | 156.86 | 8000 | 1.6092 | 0.7506 | 0.7506 |
| 0.0375 | 160.78 | 8200 | 1.6032 | 0.7531 | 0.7531 |
| 0.0354 | 164.71 | 8400 | 1.5828 | 0.7618 | 0.7617 |
| 0.0372 | 168.63 | 8600 | 1.6199 | 0.7467 | 0.7469 |
| 0.0338 | 172.55 | 8800 | 1.6226 | 0.7518 | 0.7519 |
| 0.0348 | 176.47 | 9000 | 1.6164 | 0.7603 | 0.7605 |
| 0.033 | 180.39 | 9200 | 1.5916 | 0.7518 | 0.7519 |
| 0.0348 | 184.31 | 9400 | 1.5746 | 0.7555 | 0.7556 |
| 0.0342 | 188.24 | 9600 | 1.5826 | 0.7543 | 0.7543 |
| 0.0323 | 192.16 | 9800 | 1.5919 | 0.7506 | 0.7506 |
| 0.03 | 196.08 | 10000 | 1.5983 | 0.7506 | 0.7506 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_0-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:23:02+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** Mohamedshaaban2001
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
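A hedged inference sketch with Unsloth's loader (assuming the repo holds a complete or merged checkpoint; adjust `max_seq_length` as needed):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Mohamedshaaban2001/llama3_4",  # this repo
    max_seq_length=2048,   # assumption; set to your context length
    load_in_4bit=True,     # matches the 4-bit base model used for training
)
FastLanguageModel.for_inference(model)  # switch the model into fast inference mode
```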
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | Mohamedshaaban2001/llama3_4 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:23:29+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-llama-adaptertoxic2nontoxic-100-filtered-50-0.009 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:24:21+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eli5_dir
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5573
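For a causal language model, this cross-entropy loss corresponds to a perplexity of exp(3.5573) ≈ 35.1 on the evaluation set; a quick check:

```python
import math

eval_loss = 3.5573
print(math.exp(eval_loss))  # ≈ 35.1, the evaluation perplexity
```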
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6991 | 1.0 | 1314 | 3.5643 |
| 3.5819 | 2.0 | 2628 | 3.5568 |
| 3.5421 | 3.0 | 3942 | 3.5573 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "gpt2", "model-index": [{"name": "eli5_dir", "results": []}]} | BohanJiang/eli5_dir | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:25:49+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2265
- Accuracy: 0.9387
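A hedged usage sketch with the 🤗 pipeline API (the label names depend on the unspecified fine-tuning dataset):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="WillXH/my_awesome_model")
print(classifier("This movie was absolutely wonderful!"))  # labels depend on training data
```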
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2122 | 1.0 | 1563 | 0.2055 | 0.9221 |
| 0.1262 | 2.0 | 3126 | 0.2265 | 0.9387 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "my_awesome_model", "results": []}]} | WillXH/my_awesome_model | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:26:52+00:00 |
null | null | {} | Tennish/raj | null | [
"region:us"
]
| null | 2024-04-27T05:28:00+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/1plso1l | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:28:02+00:00 |
text-to-audio | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
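In the absence of author-provided code, here is a minimal sketch based only on this repo's tags (`vits`, `text-to-audio`); treat the class choice and usage as assumptions, not documented behavior:

```python
import torch
from transformers import VitsModel, AutoTokenizer

# Repo id taken from this card; VITS classes assumed from the repo tags.
model_id = "procit001/female_english_voice_v1.4"
model = VitsModel.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello from a VITS text-to-speech model.", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform  # (batch, num_samples)

# model.config.sampling_rate gives the rate needed to save or play the audio.
```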
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | procit001/female_english_voice_v1.4 | null | [
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:28:44+00:00 |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
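For the very first run (no `--resume`), and for publishing a finished run back to the Hub, the corresponding commands are sketched below; the local directory, repo id, and commit message are illustrative placeholders:

```bash
# Start a fresh training run (drop --resume on the first launch)
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id>

# Push the trained run to the Hub (paths are placeholders)
mlagents-push-to-hf --run-id=<run_id> --local-dir=./results/<run_id> \
  --repo-id=<your_username>/<repo_name> --commit-message="Trained SnowballTarget agent"
```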
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hossniper/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]} | hossniper/ppo-SnowballTarget | null | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| null | 2024-04-27T05:29:39+00:00 |
null | null | {} | KarthikSaran/orpo-phi3 | null | [
"region:us"
]
| null | 2024-04-27T05:31:19+00:00 |
|
null | null | What is Dozerex Tablet?
Dozerex is a premium-quality men's health capsule formulated to support fitness and energy levels. Its advanced formula combines a synergistic blend of vitamins, minerals, and herbal extracts, specifically selected to promote optimal health and well-being in men.
Official website:<a href="https://www.nutritionsee.com/dozermlaysi">www.Dozerex.com</a>
<p><a href="https://www.nutritionsee.com/dozermlaysi"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/Dozerex-Malaysia-1.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/dozermlaysi">Buy now!! Click the link below for more information and get a 50% discount now... Hurry</a>
Official website:<a href="https://www.nutritionsee.com/dozermlaysi">www.Dozerex.com</a> | {"license": "apache-2.0"} | DozerexMalaysia/Dozerex | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-04-27T05:31:26+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pavanch121/distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1160
- Validation Loss: 0.3811
- Train Precision: 0.5648
- Train Recall: 0.3291
- Train F1: 0.4159
- Train Accuracy: 0.9237
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
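While the card gives no usage guidance, a token-classification checkpoint like this one can usually be exercised with the `pipeline` API; a hedged sketch follows (since this repo ships TensorFlow weights, the framework may need to be forced):

```python
from transformers import pipeline

# framework="tf" because this repo ships TensorFlow weights (see the `tf` tag).
ner = pipeline(
    "token-classification",
    model="pavanch121/distilbert-base-uncased-finetuned-ner",
    framework="tf",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```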
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3179 | 0.4210 | 0.4599 | 0.1002 | 0.1645 | 0.9054 | 0 |
| 0.1493 | 0.3804 | 0.5184 | 0.3029 | 0.3823 | 0.9203 | 1 |
| 0.1160 | 0.3811 | 0.5648 | 0.3291 | 0.4159 | 0.9237 | 2 |
### Framework versions
- Transformers 4.40.0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "pavanch121/distilbert-base-uncased-finetuned-ner", "results": []}]} | pavanch121/distilbert-base-uncased-finetuned-ner | null | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:31:26+00:00 |
null | null | {} | jeongkee10/En2Ko_100k | null | [
"region:us"
]
| null | 2024-04-27T05:31:50+00:00 |
|
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - AdityaNath/Jap_Arch_LoRA
<Gallery />
## Model description
These are AdityaNath/Jap_Arch_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of Jap_Arch Architecture to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](AdityaNath/Jap_Arch_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
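Until the authors fill in the TODO above, here is a minimal sketch assembled from the base model, trigger words, and VAE named in this card — an assumption, not author-verified code (the extra prompt words and output path are illustrative):

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

# VAE and base model are the ones named in this card.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adaption weights from this repository.
pipe.load_lora_weights("AdityaNath/Jap_Arch_LoRA")

# The trigger phrase comes from the "Trigger words" section above.
image = pipe("a photo of Jap_Arch Architecture, wooden temple, garden").images[0]
image.save("jap_arch.png")
```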
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of Jap_Arch Architecture", "widget": []} | AdityaNath/Jap_Arch_LoRA | null | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| null | 2024-04-27T05:31:54+00:00 |
null | null | {"license": "mit"} | usuijuice/test | null | [
"license:mit",
"region:us"
]
| null | 2024-04-27T05:32:35+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2336
- F1 Score: 0.8980
- Accuracy: 0.8980
## Model description
More information needed
## Intended uses & limitations
More information needed
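The card does not show how to consume the adapter. Assuming it is a PEFT sequence-classification adapter over the named base model — a guess from the library tag and the F1/accuracy metrics, not documentation — loading might look like the sketch below (custom bases such as seqsight may additionally need `trust_remote_code=True` or an explicit `num_labels`):

```python
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

repo = "mahdibaghbanzadeh/GUE_mouse_1-seqsight_8192_512_30M-L8_f"
model = AutoPeftModelForSequenceClassification.from_pretrained(repo)  # assumption: classification head
tokenizer = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_8192_512_30M")

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
print(model(**inputs).logits)
```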
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.446 | 0.47 | 200 | 0.3149 | 0.8595 | 0.8595 |
| 0.3297 | 0.95 | 400 | 0.2748 | 0.8794 | 0.8795 |
| 0.2983 | 1.42 | 600 | 0.2600 | 0.8850 | 0.8851 |
| 0.2929 | 1.9 | 800 | 0.2536 | 0.8886 | 0.8887 |
| 0.2758 | 2.37 | 1000 | 0.2500 | 0.8900 | 0.8900 |
| 0.2654 | 2.84 | 1200 | 0.2445 | 0.8919 | 0.8921 |
| 0.2546 | 3.32 | 1400 | 0.2403 | 0.8944 | 0.8944 |
| 0.2594 | 3.79 | 1600 | 0.2435 | 0.8955 | 0.8955 |
| 0.2512 | 4.27 | 1800 | 0.2406 | 0.8974 | 0.8976 |
| 0.2462 | 4.74 | 2000 | 0.2467 | 0.8947 | 0.8947 |
| 0.2493 | 5.21 | 2200 | 0.2385 | 0.8968 | 0.8970 |
| 0.2445 | 5.69 | 2400 | 0.2371 | 0.8984 | 0.8984 |
| 0.2405 | 6.16 | 2600 | 0.2362 | 0.8963 | 0.8965 |
| 0.239 | 6.64 | 2800 | 0.2367 | 0.8971 | 0.8971 |
| 0.2406 | 7.11 | 3000 | 0.2345 | 0.8986 | 0.8986 |
| 0.2331 | 7.58 | 3200 | 0.2425 | 0.8961 | 0.8961 |
| 0.2403 | 8.06 | 3400 | 0.2270 | 0.9018 | 0.9019 |
| 0.2318 | 8.53 | 3600 | 0.2334 | 0.9011 | 0.9011 |
| 0.2378 | 9.0 | 3800 | 0.2284 | 0.9021 | 0.9021 |
| 0.2284 | 9.48 | 4000 | 0.2290 | 0.9033 | 0.9033 |
| 0.2333 | 9.95 | 4200 | 0.2279 | 0.9026 | 0.9026 |
| 0.2276 | 10.43 | 4400 | 0.2298 | 0.9020 | 0.9020 |
| 0.2266 | 10.9 | 4600 | 0.2311 | 0.9011 | 0.9011 |
| 0.2218 | 11.37 | 4800 | 0.2346 | 0.8990 | 0.8990 |
| 0.2308 | 11.85 | 5000 | 0.2291 | 0.9022 | 0.9023 |
| 0.2272 | 12.32 | 5200 | 0.2355 | 0.8962 | 0.8962 |
| 0.2263 | 12.8 | 5400 | 0.2331 | 0.9004 | 0.9004 |
| 0.2254 | 13.27 | 5600 | 0.2235 | 0.9026 | 0.9026 |
| 0.2199 | 13.74 | 5800 | 0.2265 | 0.9045 | 0.9045 |
| 0.2236 | 14.22 | 6000 | 0.2323 | 0.9010 | 0.9010 |
| 0.219 | 14.69 | 6200 | 0.2272 | 0.9063 | 0.9063 |
| 0.2209 | 15.17 | 6400 | 0.2320 | 0.9010 | 0.9010 |
| 0.2213 | 15.64 | 6600 | 0.2243 | 0.9044 | 0.9044 |
| 0.2155 | 16.11 | 6800 | 0.2260 | 0.9049 | 0.9050 |
| 0.2151 | 16.59 | 7000 | 0.2341 | 0.9007 | 0.9007 |
| 0.2209 | 17.06 | 7200 | 0.2245 | 0.9030 | 0.9030 |
| 0.2149 | 17.54 | 7400 | 0.2291 | 0.9020 | 0.9020 |
| 0.2171 | 18.01 | 7600 | 0.2228 | 0.9056 | 0.9056 |
| 0.2146 | 18.48 | 7800 | 0.2288 | 0.9033 | 0.9033 |
| 0.2202 | 18.96 | 8000 | 0.2217 | 0.9067 | 0.9067 |
| 0.2125 | 19.43 | 8200 | 0.2289 | 0.9030 | 0.9030 |
| 0.2152 | 19.91 | 8400 | 0.2247 | 0.9058 | 0.9059 |
| 0.2161 | 20.38 | 8600 | 0.2269 | 0.9029 | 0.9029 |
| 0.2133 | 20.85 | 8800 | 0.2236 | 0.9054 | 0.9054 |
| 0.2105 | 21.33 | 9000 | 0.2246 | 0.9044 | 0.9044 |
| 0.2108 | 21.8 | 9200 | 0.2271 | 0.9038 | 0.9038 |
| 0.2137 | 22.27 | 9400 | 0.2250 | 0.9045 | 0.9045 |
| 0.2097 | 22.75 | 9600 | 0.2235 | 0.9053 | 0.9053 |
| 0.2136 | 23.22 | 9800 | 0.2240 | 0.9045 | 0.9045 |
| 0.2164 | 23.7 | 10000 | 0.2241 | 0.9050 | 0.9050 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_1-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:33:25+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2331
- F1 Score: 0.9027
- Accuracy: 0.9027
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4165 | 0.47 | 200 | 0.2963 | 0.8706 | 0.8706 |
| 0.3038 | 0.95 | 400 | 0.2607 | 0.8855 | 0.8855 |
| 0.2779 | 1.42 | 600 | 0.2447 | 0.8928 | 0.8928 |
| 0.2724 | 1.9 | 800 | 0.2473 | 0.8930 | 0.8930 |
| 0.2578 | 2.37 | 1000 | 0.2450 | 0.8952 | 0.8952 |
| 0.2486 | 2.84 | 1200 | 0.2324 | 0.8978 | 0.8979 |
| 0.2404 | 3.32 | 1400 | 0.2364 | 0.9021 | 0.9021 |
| 0.2443 | 3.79 | 1600 | 0.2320 | 0.9008 | 0.9008 |
| 0.2377 | 4.27 | 1800 | 0.2301 | 0.9030 | 0.9030 |
| 0.2336 | 4.74 | 2000 | 0.2416 | 0.8990 | 0.8990 |
| 0.2348 | 5.21 | 2200 | 0.2311 | 0.9018 | 0.9020 |
| 0.2306 | 5.69 | 2400 | 0.2322 | 0.9009 | 0.9010 |
| 0.2269 | 6.16 | 2600 | 0.2250 | 0.9038 | 0.9039 |
| 0.2256 | 6.64 | 2800 | 0.2328 | 0.9006 | 0.9007 |
| 0.2236 | 7.11 | 3000 | 0.2297 | 0.8999 | 0.8999 |
| 0.2151 | 7.58 | 3200 | 0.2326 | 0.9017 | 0.9017 |
| 0.2253 | 8.06 | 3400 | 0.2190 | 0.9035 | 0.9035 |
| 0.213 | 8.53 | 3600 | 0.2303 | 0.9039 | 0.9039 |
| 0.2205 | 9.0 | 3800 | 0.2221 | 0.9070 | 0.9070 |
| 0.2111 | 9.48 | 4000 | 0.2212 | 0.9048 | 0.9048 |
| 0.2136 | 9.95 | 4200 | 0.2193 | 0.9064 | 0.9064 |
| 0.2083 | 10.43 | 4400 | 0.2244 | 0.9054 | 0.9054 |
| 0.208 | 10.9 | 4600 | 0.2238 | 0.9047 | 0.9047 |
| 0.2019 | 11.37 | 4800 | 0.2229 | 0.9069 | 0.9069 |
| 0.2094 | 11.85 | 5000 | 0.2241 | 0.9063 | 0.9063 |
| 0.2044 | 12.32 | 5200 | 0.2303 | 0.9014 | 0.9014 |
| 0.2034 | 12.8 | 5400 | 0.2306 | 0.9070 | 0.9070 |
| 0.2007 | 13.27 | 5600 | 0.2203 | 0.9079 | 0.9079 |
| 0.1984 | 13.74 | 5800 | 0.2237 | 0.9069 | 0.9069 |
| 0.2013 | 14.22 | 6000 | 0.2351 | 0.9013 | 0.9013 |
| 0.1946 | 14.69 | 6200 | 0.2232 | 0.9085 | 0.9085 |
| 0.1978 | 15.17 | 6400 | 0.2263 | 0.9057 | 0.9057 |
| 0.1959 | 15.64 | 6600 | 0.2242 | 0.9064 | 0.9064 |
| 0.1917 | 16.11 | 6800 | 0.2255 | 0.9061 | 0.9062 |
| 0.1874 | 16.59 | 7000 | 0.2316 | 0.9045 | 0.9045 |
| 0.1962 | 17.06 | 7200 | 0.2231 | 0.9076 | 0.9076 |
| 0.1867 | 17.54 | 7400 | 0.2283 | 0.9063 | 0.9063 |
| 0.1898 | 18.01 | 7600 | 0.2215 | 0.9079 | 0.9079 |
| 0.1861 | 18.48 | 7800 | 0.2292 | 0.9039 | 0.9039 |
| 0.1913 | 18.96 | 8000 | 0.2219 | 0.9082 | 0.9082 |
| 0.1844 | 19.43 | 8200 | 0.2305 | 0.9042 | 0.9042 |
| 0.1883 | 19.91 | 8400 | 0.2268 | 0.9073 | 0.9073 |
| 0.1852 | 20.38 | 8600 | 0.2343 | 0.9038 | 0.9038 |
| 0.1831 | 20.85 | 8800 | 0.2269 | 0.9079 | 0.9079 |
| 0.1816 | 21.33 | 9000 | 0.2298 | 0.9036 | 0.9036 |
| 0.1808 | 21.8 | 9200 | 0.2305 | 0.9030 | 0.9030 |
| 0.1833 | 22.27 | 9400 | 0.2284 | 0.9045 | 0.9045 |
| 0.1769 | 22.75 | 9600 | 0.2287 | 0.9070 | 0.9070 |
| 0.1815 | 23.22 | 9800 | 0.2289 | 0.9064 | 0.9064 |
| 0.1843 | 23.7 | 10000 | 0.2288 | 0.9048 | 0.9048 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_1-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:33:49+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5717
- F1 Score: 0.7021
- Accuracy: 0.7021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6553 | 1.69 | 200 | 0.6203 | 0.6413 | 0.6421 |
| 0.6235 | 3.39 | 400 | 0.6047 | 0.6544 | 0.6543 |
| 0.6066 | 5.08 | 600 | 0.5917 | 0.6692 | 0.6691 |
| 0.5969 | 6.78 | 800 | 0.5804 | 0.6836 | 0.6835 |
| 0.5883 | 8.47 | 1000 | 0.5717 | 0.6856 | 0.6856 |
| 0.5805 | 10.17 | 1200 | 0.5665 | 0.6989 | 0.6989 |
| 0.5744 | 11.86 | 1400 | 0.5588 | 0.7068 | 0.7074 |
| 0.5687 | 13.56 | 1600 | 0.5531 | 0.7111 | 0.7111 |
| 0.5621 | 15.25 | 1800 | 0.5536 | 0.7169 | 0.7175 |
| 0.5579 | 16.95 | 2000 | 0.5514 | 0.7116 | 0.7122 |
| 0.555 | 18.64 | 2200 | 0.5498 | 0.7143 | 0.7148 |
| 0.554 | 20.34 | 2400 | 0.5472 | 0.7173 | 0.7175 |
| 0.5522 | 22.03 | 2600 | 0.5602 | 0.7036 | 0.7063 |
| 0.5492 | 23.73 | 2800 | 0.5442 | 0.7234 | 0.7233 |
| 0.5455 | 25.42 | 3000 | 0.5447 | 0.7194 | 0.7196 |
| 0.5446 | 27.12 | 3200 | 0.5541 | 0.7038 | 0.7063 |
| 0.5418 | 28.81 | 3400 | 0.5449 | 0.7240 | 0.7244 |
| 0.5385 | 30.51 | 3600 | 0.5404 | 0.7277 | 0.7276 |
| 0.5376 | 32.2 | 3800 | 0.5398 | 0.7313 | 0.7313 |
| 0.538 | 33.9 | 4000 | 0.5468 | 0.7242 | 0.7249 |
| 0.5312 | 35.59 | 4200 | 0.5471 | 0.7261 | 0.7265 |
| 0.5362 | 37.29 | 4400 | 0.5402 | 0.7313 | 0.7313 |
| 0.5308 | 38.98 | 4600 | 0.5377 | 0.7287 | 0.7286 |
| 0.5299 | 40.68 | 4800 | 0.5457 | 0.7234 | 0.7244 |
| 0.5245 | 42.37 | 5000 | 0.5421 | 0.7348 | 0.7350 |
| 0.5284 | 44.07 | 5200 | 0.5382 | 0.7398 | 0.7398 |
| 0.5243 | 45.76 | 5400 | 0.5384 | 0.7342 | 0.7345 |
| 0.5236 | 47.46 | 5600 | 0.5374 | 0.7393 | 0.7392 |
| 0.5267 | 49.15 | 5800 | 0.5378 | 0.7351 | 0.7355 |
| 0.5217 | 50.85 | 6000 | 0.5371 | 0.7332 | 0.7334 |
| 0.5249 | 52.54 | 6200 | 0.5338 | 0.7382 | 0.7382 |
| 0.5209 | 54.24 | 6400 | 0.5371 | 0.7327 | 0.7329 |
| 0.5222 | 55.93 | 6600 | 0.5350 | 0.7387 | 0.7387 |
| 0.5191 | 57.63 | 6800 | 0.5358 | 0.7388 | 0.7387 |
| 0.519 | 59.32 | 7000 | 0.5411 | 0.7307 | 0.7313 |
| 0.5174 | 61.02 | 7200 | 0.5345 | 0.7409 | 0.7408 |
| 0.5175 | 62.71 | 7400 | 0.5361 | 0.7382 | 0.7382 |
| 0.5162 | 64.41 | 7600 | 0.5360 | 0.7327 | 0.7329 |
| 0.5175 | 66.1 | 7800 | 0.5352 | 0.7317 | 0.7318 |
| 0.5172 | 67.8 | 8000 | 0.5342 | 0.7350 | 0.7350 |
| 0.5136 | 69.49 | 8200 | 0.5342 | 0.7340 | 0.7339 |
| 0.5157 | 71.19 | 8400 | 0.5347 | 0.7349 | 0.7350 |
| 0.5145 | 72.88 | 8600 | 0.5341 | 0.7388 | 0.7387 |
| 0.5138 | 74.58 | 8800 | 0.5362 | 0.7348 | 0.7350 |
| 0.5118 | 76.27 | 9000 | 0.5353 | 0.7360 | 0.7361 |
| 0.5148 | 77.97 | 9200 | 0.5372 | 0.7316 | 0.7318 |
| 0.5127 | 79.66 | 9400 | 0.5351 | 0.7361 | 0.7361 |
| 0.5109 | 81.36 | 9600 | 0.5358 | 0.7338 | 0.7339 |
| 0.5141 | 83.05 | 9800 | 0.5353 | 0.7355 | 0.7355 |
| 0.51 | 84.75 | 10000 | 0.5356 | 0.7338 | 0.7339 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_4-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:34:15+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6126
- F1 Score: 0.6998
- Accuracy: 0.6999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6406 | 1.69 | 200 | 0.6052 | 0.6636 | 0.6654 |
| 0.6016 | 3.39 | 400 | 0.5785 | 0.6849 | 0.6856 |
| 0.5745 | 5.08 | 600 | 0.5611 | 0.7072 | 0.7095 |
| 0.5629 | 6.78 | 800 | 0.5499 | 0.7169 | 0.7175 |
| 0.5536 | 8.47 | 1000 | 0.5510 | 0.7174 | 0.7185 |
| 0.5444 | 10.17 | 1200 | 0.5478 | 0.7220 | 0.7228 |
| 0.5396 | 11.86 | 1400 | 0.5411 | 0.7296 | 0.7297 |
| 0.5321 | 13.56 | 1600 | 0.5419 | 0.7308 | 0.7313 |
| 0.5251 | 15.25 | 1800 | 0.5469 | 0.7247 | 0.7254 |
| 0.5194 | 16.95 | 2000 | 0.5464 | 0.7303 | 0.7318 |
| 0.5131 | 18.64 | 2200 | 0.5617 | 0.7197 | 0.7233 |
| 0.5114 | 20.34 | 2400 | 0.5442 | 0.7282 | 0.7281 |
| 0.5074 | 22.03 | 2600 | 0.5555 | 0.7256 | 0.7265 |
| 0.4998 | 23.73 | 2800 | 0.5419 | 0.7308 | 0.7307 |
| 0.4942 | 25.42 | 3000 | 0.5530 | 0.7242 | 0.7254 |
| 0.4927 | 27.12 | 3200 | 0.5530 | 0.7265 | 0.7270 |
| 0.4861 | 28.81 | 3400 | 0.5565 | 0.7246 | 0.7249 |
| 0.481 | 30.51 | 3600 | 0.5561 | 0.7266 | 0.7265 |
| 0.479 | 32.2 | 3800 | 0.5578 | 0.7290 | 0.7292 |
| 0.4805 | 33.9 | 4000 | 0.5657 | 0.7225 | 0.7228 |
| 0.4664 | 35.59 | 4200 | 0.5717 | 0.7165 | 0.7175 |
| 0.4697 | 37.29 | 4400 | 0.5633 | 0.7248 | 0.7249 |
| 0.4618 | 38.98 | 4600 | 0.5758 | 0.7346 | 0.7350 |
| 0.4588 | 40.68 | 4800 | 0.5711 | 0.7144 | 0.7153 |
| 0.4515 | 42.37 | 5000 | 0.5816 | 0.7250 | 0.7249 |
| 0.4543 | 44.07 | 5200 | 0.5856 | 0.7201 | 0.7201 |
| 0.4511 | 45.76 | 5400 | 0.5703 | 0.7215 | 0.7217 |
| 0.4462 | 47.46 | 5600 | 0.5776 | 0.7287 | 0.7286 |
| 0.4482 | 49.15 | 5800 | 0.5725 | 0.7174 | 0.7180 |
| 0.4399 | 50.85 | 6000 | 0.5715 | 0.7314 | 0.7313 |
| 0.4409 | 52.54 | 6200 | 0.5766 | 0.7381 | 0.7382 |
| 0.4337 | 54.24 | 6400 | 0.5738 | 0.7198 | 0.7201 |
| 0.4332 | 55.93 | 6600 | 0.5786 | 0.7249 | 0.7249 |
| 0.4295 | 57.63 | 6800 | 0.5863 | 0.7271 | 0.7270 |
| 0.4284 | 59.32 | 7000 | 0.5902 | 0.7162 | 0.7164 |
| 0.4261 | 61.02 | 7200 | 0.5840 | 0.7228 | 0.7228 |
| 0.4232 | 62.71 | 7400 | 0.5878 | 0.7345 | 0.7345 |
| 0.4201 | 64.41 | 7600 | 0.5917 | 0.7266 | 0.7265 |
| 0.4209 | 66.1 | 7800 | 0.5925 | 0.7254 | 0.7254 |
| 0.4204 | 67.8 | 8000 | 0.5818 | 0.7282 | 0.7281 |
| 0.414 | 69.49 | 8200 | 0.5877 | 0.7298 | 0.7297 |
| 0.4171 | 71.19 | 8400 | 0.5855 | 0.7335 | 0.7334 |
| 0.4147 | 72.88 | 8600 | 0.5864 | 0.7330 | 0.7329 |
| 0.4123 | 74.58 | 8800 | 0.5875 | 0.7260 | 0.7260 |
| 0.4137 | 76.27 | 9000 | 0.5882 | 0.7302 | 0.7302 |
| 0.4089 | 77.97 | 9200 | 0.5970 | 0.7270 | 0.7270 |
| 0.4101 | 79.66 | 9400 | 0.5938 | 0.7282 | 0.7281 |
| 0.4052 | 81.36 | 9600 | 0.5939 | 0.7270 | 0.7270 |
| 0.4093 | 83.05 | 9800 | 0.5921 | 0.7265 | 0.7265 |
| 0.4066 | 84.75 | 10000 | 0.5929 | 0.7281 | 0.7281 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_4-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:36:09+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5704
- F1 Score: 0.7011
- Accuracy: 0.7015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6317 | 1.69 | 200 | 0.5983 | 0.6722 | 0.6760 |
| 0.5856 | 3.39 | 400 | 0.5636 | 0.6984 | 0.6984 |
| 0.5537 | 5.08 | 600 | 0.5532 | 0.7112 | 0.7138 |
| 0.5382 | 6.78 | 800 | 0.5450 | 0.7386 | 0.7387 |
| 0.5252 | 8.47 | 1000 | 0.5479 | 0.7290 | 0.7297 |
| 0.5047 | 10.17 | 1200 | 0.5433 | 0.7203 | 0.7207 |
| 0.4949 | 11.86 | 1400 | 0.5478 | 0.7263 | 0.7270 |
| 0.4789 | 13.56 | 1600 | 0.5500 | 0.7245 | 0.7249 |
| 0.4638 | 15.25 | 1800 | 0.5529 | 0.7276 | 0.7276 |
| 0.4478 | 16.95 | 2000 | 0.5669 | 0.7104 | 0.7116 |
| 0.432 | 18.64 | 2200 | 0.5694 | 0.7255 | 0.7260 |
| 0.422 | 20.34 | 2400 | 0.5838 | 0.7282 | 0.7281 |
| 0.4084 | 22.03 | 2600 | 0.5957 | 0.7314 | 0.7313 |
| 0.3935 | 23.73 | 2800 | 0.5820 | 0.7313 | 0.7313 |
| 0.382 | 25.42 | 3000 | 0.6444 | 0.7235 | 0.7249 |
| 0.3741 | 27.12 | 3200 | 0.6335 | 0.7254 | 0.7254 |
| 0.3597 | 28.81 | 3400 | 0.6612 | 0.7186 | 0.7185 |
| 0.3444 | 30.51 | 3600 | 0.6478 | 0.7213 | 0.7212 |
| 0.3428 | 32.2 | 3800 | 0.6803 | 0.7223 | 0.7223 |
| 0.3379 | 33.9 | 4000 | 0.6703 | 0.7168 | 0.7169 |
| 0.312 | 35.59 | 4200 | 0.7018 | 0.7139 | 0.7143 |
| 0.3171 | 37.29 | 4400 | 0.6989 | 0.7212 | 0.7212 |
| 0.2973 | 38.98 | 4600 | 0.7242 | 0.7190 | 0.7191 |
| 0.2929 | 40.68 | 4800 | 0.7338 | 0.7101 | 0.7100 |
| 0.2837 | 42.37 | 5000 | 0.7864 | 0.7176 | 0.7175 |
| 0.2818 | 44.07 | 5200 | 0.7733 | 0.7181 | 0.7180 |
| 0.2745 | 45.76 | 5400 | 0.7912 | 0.7123 | 0.7122 |
| 0.2673 | 47.46 | 5600 | 0.8100 | 0.7235 | 0.7244 |
| 0.2611 | 49.15 | 5800 | 0.7809 | 0.7117 | 0.7116 |
| 0.2597 | 50.85 | 6000 | 0.7785 | 0.7138 | 0.7138 |
| 0.2481 | 52.54 | 6200 | 0.8297 | 0.7132 | 0.7132 |
| 0.2423 | 54.24 | 6400 | 0.8508 | 0.7016 | 0.7015 |
| 0.2402 | 55.93 | 6600 | 0.8418 | 0.7085 | 0.7084 |
| 0.2325 | 57.63 | 6800 | 0.8314 | 0.7112 | 0.7111 |
| 0.2315 | 59.32 | 7000 | 0.8885 | 0.7117 | 0.7116 |
| 0.2254 | 61.02 | 7200 | 0.8921 | 0.7074 | 0.7074 |
| 0.2231 | 62.71 | 7400 | 0.9142 | 0.7184 | 0.7185 |
| 0.2159 | 64.41 | 7600 | 0.9128 | 0.7105 | 0.7111 |
| 0.2149 | 66.1 | 7800 | 0.9018 | 0.7139 | 0.7138 |
| 0.2137 | 67.8 | 8000 | 0.9168 | 0.7043 | 0.7042 |
| 0.2092 | 69.49 | 8200 | 0.9040 | 0.7135 | 0.7138 |
| 0.2042 | 71.19 | 8400 | 0.9157 | 0.7102 | 0.7106 |
| 0.2061 | 72.88 | 8600 | 0.8987 | 0.7109 | 0.7111 |
| 0.2004 | 74.58 | 8800 | 0.9239 | 0.7089 | 0.7090 |
| 0.202 | 76.27 | 9000 | 0.9158 | 0.7095 | 0.7095 |
| 0.1969 | 77.97 | 9200 | 0.9263 | 0.7048 | 0.7047 |
| 0.1947 | 79.66 | 9400 | 0.9382 | 0.7039 | 0.7042 |
| 0.1934 | 81.36 | 9600 | 0.9429 | 0.7052 | 0.7053 |
| 0.1914 | 83.05 | 9800 | 0.9465 | 0.7024 | 0.7026 |
| 0.192 | 84.75 | 10000 | 0.9481 | 0.7029 | 0.7031 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_4-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:36:29+00:00 |
automatic-speech-recognition | transformers | {} | ddddd3424/whisper-small-zh-HK | null | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:38:04+00:00 |
|
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-s-201
This model is a fine-tuned version of [facebook/dinov2-small-imagenet1k-1-layer](https://huggingface.co/facebook/dinov2-small-imagenet1k-1-layer) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5503
- Accuracy: 0.8049
## Model description
More information needed
## Intended uses & limitations
More information needed
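No usage snippet is provided; for an image-classification fine-tune like this one, a plausible minimal sketch (the image path is a placeholder) is:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="niraj003/dinov2-s-201")
print(classifier("example.jpg"))  # placeholder path to a local image
```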
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 1.7244 | 0.2195 |
| 1.4057 | 2.0 | 10 | 1.1285 | 0.5122 |
| 1.4057 | 3.0 | 15 | 0.6513 | 0.7561 |
| 0.8392 | 4.0 | 20 | 0.5946 | 0.8049 |
| 0.8392 | 5.0 | 25 | 0.6221 | 0.8293 |
| 0.6571 | 6.0 | 30 | 1.3668 | 0.4878 |
| 0.6571 | 7.0 | 35 | 0.6909 | 0.6585 |
| 0.7314 | 8.0 | 40 | 0.6185 | 0.7073 |
| 0.7314 | 9.0 | 45 | 1.1204 | 0.5122 |
| 0.6679 | 10.0 | 50 | 0.6920 | 0.7073 |
| 0.6679 | 11.0 | 55 | 0.5515 | 0.7561 |
| 0.5023 | 12.0 | 60 | 0.8328 | 0.6829 |
| 0.5023 | 13.0 | 65 | 0.5849 | 0.7805 |
| 0.5507 | 14.0 | 70 | 0.4574 | 0.8293 |
| 0.5507 | 15.0 | 75 | 0.7229 | 0.7317 |
| 0.4605 | 16.0 | 80 | 0.6463 | 0.6829 |
| 0.4605 | 17.0 | 85 | 0.5158 | 0.7805 |
| 0.3592 | 18.0 | 90 | 0.5429 | 0.7317 |
| 0.3592 | 19.0 | 95 | 0.4544 | 0.8293 |
| 0.3719 | 20.0 | 100 | 0.5683 | 0.7805 |
| 0.3719 | 21.0 | 105 | 0.7423 | 0.7073 |
| 0.4792 | 22.0 | 110 | 0.6053 | 0.7561 |
| 0.4792 | 23.0 | 115 | 0.5218 | 0.8049 |
| 0.3421 | 24.0 | 120 | 0.5553 | 0.8049 |
| 0.3421 | 25.0 | 125 | 0.6367 | 0.7805 |
| 0.3528 | 26.0 | 130 | 0.3843 | 0.8049 |
| 0.3528 | 27.0 | 135 | 0.6923 | 0.7317 |
| 0.3335 | 28.0 | 140 | 0.6799 | 0.7073 |
| 0.3335 | 29.0 | 145 | 1.0437 | 0.6098 |
| 0.2933 | 30.0 | 150 | 0.8362 | 0.7073 |
| 0.2933 | 31.0 | 155 | 0.6174 | 0.7073 |
| 0.2902 | 32.0 | 160 | 0.5487 | 0.8780 |
| 0.2902 | 33.0 | 165 | 0.6631 | 0.8049 |
| 0.3046 | 34.0 | 170 | 0.7015 | 0.7561 |
| 0.3046 | 35.0 | 175 | 0.5250 | 0.8049 |
| 0.2355 | 36.0 | 180 | 0.6684 | 0.8537 |
| 0.2355 | 37.0 | 185 | 0.5820 | 0.7805 |
| 0.21 | 38.0 | 190 | 0.7903 | 0.7805 |
| 0.21 | 39.0 | 195 | 0.4358 | 0.9024 |
| 0.1833 | 40.0 | 200 | 0.8039 | 0.8293 |
| 0.1833 | 41.0 | 205 | 0.6242 | 0.8537 |
| 0.2227 | 42.0 | 210 | 0.7574 | 0.7073 |
| 0.2227 | 43.0 | 215 | 0.8873 | 0.7561 |
| 0.1831 | 44.0 | 220 | 0.9501 | 0.7561 |
| 0.1831 | 45.0 | 225 | 0.8774 | 0.8293 |
| 0.1815 | 46.0 | 230 | 0.7826 | 0.8049 |
| 0.1815 | 47.0 | 235 | 1.1516 | 0.6829 |
| 0.1615 | 48.0 | 240 | 0.6514 | 0.8537 |
| 0.1615 | 49.0 | 245 | 0.5799 | 0.8049 |
| 0.1381 | 50.0 | 250 | 0.7545 | 0.7805 |
| 0.1381 | 51.0 | 255 | 0.5452 | 0.8049 |
| 0.1462 | 52.0 | 260 | 0.7610 | 0.8049 |
| 0.1462 | 53.0 | 265 | 0.7827 | 0.8049 |
| 0.1096 | 54.0 | 270 | 0.6393 | 0.8537 |
| 0.1096 | 55.0 | 275 | 0.5902 | 0.8293 |
| 0.0914 | 56.0 | 280 | 0.7998 | 0.8537 |
| 0.0914 | 57.0 | 285 | 0.9032 | 0.7805 |
| 0.1674 | 58.0 | 290 | 0.5467 | 0.8537 |
| 0.1674 | 59.0 | 295 | 0.9872 | 0.7805 |
| 0.086 | 60.0 | 300 | 0.6481 | 0.8537 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "facebook/dinov2-small-imagenet1k-1-layer", "model-index": [{"name": "dinov2-s-201", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8048780487804879, "name": "Accuracy"}]}]}]} | niraj003/dinov2-s-201 | null | [
"transformers",
"tensorboard",
"safetensors",
"dinov2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/dinov2-small-imagenet1k-1-layer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:38:05+00:00 |
null | null | {"license": "mit"} | whacker/testserimllm | null | [
"license:mit",
"region:us"
]
| null | 2024-04-27T05:40:01+00:00 |
|
null | null | {} | Ricardolpa/Freefire | null | [
"region:us"
]
| null | 2024-04-27T05:40:22+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
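As a placeholder until the authors add their own snippet, a generic sketch for a Llama-architecture chat checkpoint (inferred from this repo's tags, not from author documentation) might be:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shallow6414/56fpct9"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```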
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/56fpct9 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:40:45+00:00 |
text-generation | transformers | {} | Rimyy/Llama-2-7b-chat-finetuneGSMdata | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:42:20+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5379
- F1 Score: 0.8535
- Accuracy: 0.8536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.613 | 13.33 | 200 | 0.5346 | 0.7193 | 0.7197 |
| 0.5019 | 26.67 | 400 | 0.4513 | 0.7947 | 0.7950 |
| 0.4154 | 40.0 | 600 | 0.3783 | 0.8451 | 0.8452 |
| 0.348 | 53.33 | 800 | 0.3802 | 0.8452 | 0.8452 |
| 0.2999 | 66.67 | 1000 | 0.3966 | 0.8367 | 0.8368 |
| 0.2716 | 80.0 | 1200 | 0.4111 | 0.8452 | 0.8452 |
| 0.2493 | 93.33 | 1400 | 0.4071 | 0.8494 | 0.8494 |
| 0.2272 | 106.67 | 1600 | 0.4158 | 0.8536 | 0.8536 |
| 0.2063 | 120.0 | 1800 | 0.4486 | 0.8577 | 0.8577 |
| 0.1976 | 133.33 | 2000 | 0.4577 | 0.8703 | 0.8703 |
| 0.1834 | 146.67 | 2200 | 0.4825 | 0.8410 | 0.8410 |
| 0.1666 | 160.0 | 2400 | 0.5210 | 0.8242 | 0.8243 |
| 0.1606 | 173.33 | 2600 | 0.5225 | 0.8492 | 0.8494 |
| 0.1521 | 186.67 | 2800 | 0.5313 | 0.8452 | 0.8452 |
| 0.1472 | 200.0 | 3000 | 0.5453 | 0.8410 | 0.8410 |
| 0.1404 | 213.33 | 3200 | 0.5693 | 0.8367 | 0.8368 |
| 0.1352 | 226.67 | 3400 | 0.5634 | 0.8368 | 0.8368 |
| 0.1282 | 240.0 | 3600 | 0.5961 | 0.8241 | 0.8243 |
| 0.1208 | 253.33 | 3800 | 0.6403 | 0.8240 | 0.8243 |
| 0.1195 | 266.67 | 4000 | 0.6082 | 0.8200 | 0.8201 |
| 0.1112 | 280.0 | 4200 | 0.6709 | 0.8284 | 0.8285 |
| 0.1079 | 293.33 | 4400 | 0.6780 | 0.8284 | 0.8285 |
| 0.1079 | 306.67 | 4600 | 0.6618 | 0.8408 | 0.8410 |
| 0.1052 | 320.0 | 4800 | 0.6600 | 0.8409 | 0.8410 |
| 0.1008 | 333.33 | 5000 | 0.6764 | 0.8452 | 0.8452 |
| 0.0994 | 346.67 | 5200 | 0.7030 | 0.8284 | 0.8285 |
| 0.0993 | 360.0 | 5400 | 0.6886 | 0.8243 | 0.8243 |
| 0.097 | 373.33 | 5600 | 0.6909 | 0.8326 | 0.8326 |
| 0.0938 | 386.67 | 5800 | 0.6842 | 0.8326 | 0.8326 |
| 0.0871 | 400.0 | 6000 | 0.7277 | 0.8326 | 0.8326 |
| 0.0864 | 413.33 | 6200 | 0.7443 | 0.8368 | 0.8368 |
| 0.088 | 426.67 | 6400 | 0.7257 | 0.8368 | 0.8368 |
| 0.0883 | 440.0 | 6600 | 0.7210 | 0.8326 | 0.8326 |
| 0.085 | 453.33 | 6800 | 0.7380 | 0.8240 | 0.8243 |
| 0.0853 | 466.67 | 7000 | 0.7352 | 0.8198 | 0.8201 |
| 0.0793 | 480.0 | 7200 | 0.7687 | 0.8201 | 0.8201 |
| 0.082 | 493.33 | 7400 | 0.7717 | 0.8284 | 0.8285 |
| 0.0776 | 506.67 | 7600 | 0.7794 | 0.8159 | 0.8159 |
| 0.08 | 520.0 | 7800 | 0.7773 | 0.8284 | 0.8285 |
| 0.0803 | 533.33 | 8000 | 0.7670 | 0.8200 | 0.8201 |
| 0.0816 | 546.67 | 8200 | 0.7660 | 0.8241 | 0.8243 |
| 0.0768 | 560.0 | 8400 | 0.7663 | 0.8284 | 0.8285 |
| 0.0805 | 573.33 | 8600 | 0.7833 | 0.8201 | 0.8201 |
| 0.0748 | 586.67 | 8800 | 0.7937 | 0.8326 | 0.8326 |
| 0.0753 | 600.0 | 9000 | 0.7866 | 0.8241 | 0.8243 |
| 0.0748 | 613.33 | 9200 | 0.7897 | 0.8326 | 0.8326 |
| 0.0736 | 626.67 | 9400 | 0.7886 | 0.8326 | 0.8326 |
| 0.0742 | 640.0 | 9600 | 0.7887 | 0.8326 | 0.8326 |
| 0.0759 | 653.33 | 9800 | 0.7869 | 0.8326 | 0.8326 |
| 0.0727 | 666.67 | 10000 | 0.7866 | 0.8368 | 0.8368 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_3-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:42:35+00:00 |
null | null |
LIne2ColorID LoRA for SD 1.5!
This is an experimental LoRA that generates images in a style similar to a Color ID map.
Because it was trained on anime images, it does not work well with photorealistic models.
You can use it in conjunction with the Lineart ControlNet. Add the following prompts:
black background, colorid, green hair, blue cloth, red skin, orange face, yellow eyes
Hair: Green, Skin: Red, Clothes: Blue, Eyes: Yellow, Face: Orange
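The workflow above is described for a webui-style setup; a rough diffusers equivalent is sketched below. The SD 1.5 base and lineart ControlNet checkpoints are my illustrative choices, and the LoRA weights file layout in this repo is assumed to be loadable by `load_lora_weights` — none of this is author-specified:

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Illustrative checkpoints: any SD 1.5 anime model plus a lineart ControlNet should do.
controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("toyxyz/LIne2ColorID")  # this repository

lineart = load_image("lineart.png")  # placeholder: your line drawing
prompt = "black background, colorid, green hair, blue cloth, red skin, orange face, yellow eyes"
image = pipe(prompt, image=lineart, num_inference_steps=25).images[0]
image.save("color_id.png")
```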

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/62f2b20aeb9e8a5f05cf9a9d/L5L_CZDgc2rpatYyLzJZ8.mp4"></video>
| {} | toyxyz/LIne2ColorID | null | [
"region:us"
]
| null | 2024-04-27T05:42:47+00:00 |
null | null | {} | ahmedheakl/sythsql-llama3-v3-55600 | null | [
"tensorboard",
"safetensors",
"region:us"
]
| null | 2024-04-27T05:43:08+00:00 |
|
text-generation | transformers |
# llama-3-8b-instruct-262k-chinese
llama-3-8b-instruct-262k-chinese is a chat model obtained by fine-tuning [Llama-3-8B-Instruct-262k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k) with the ORPO method on the Chinese-English preference dataset [shibing624/DPO-En-Zh-20k-Preference](https://huggingface.co/datasets/shibing624/DPO-En-Zh-20k-Preference).
For deployment, training, and related methods, see the MedicalGPT GitHub repository: [https://github.com/shibing624/MedicalGPT](https://github.com/shibing624/MedicalGPT)
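For readers unfamiliar with ORPO, the fine-tune described above can be approximated with TRL's `ORPOTrainer`. This is a schematic sketch of the method, not the exact MedicalGPT training script; the hyperparameters are placeholders, and the dataset is assumed to expose prompt/chosen/rejected preference pairs:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "gradientai/Llama-3-8B-Instruct-262k"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# The preference dataset named in this card (assumed prompt / chosen / rejected columns).
dataset = load_dataset("shibing624/DPO-En-Zh-20k-Preference", split="train")

args = ORPOConfig(output_dir="orpo-out", beta=0.1, per_device_train_batch_size=1)  # placeholders
trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```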
## Related models
- Full model weights: https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese
- LoRA weights: https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese-lora
## Features
Model strengths:
1. Supports an extra-long context length of 262k tokens, well suited to RAG
2. Supports both Chinese and English
3. Supports multi-turn dialogue, with strong coding and reasoning ability and solid English knowledge
4. GPU memory required for inference:
Quantization | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens
-- | -- | --
FP16/BF16 | 18.66GB | 24.58GB
Int4 | 9.21GB | 14.62GB
Weaknesses:
1. With a model size of only 8B, hallucinations are noticeable in knowledge-based QA
2. Chinese knowledge is limited and prone to hallucination, especially for classical Chinese — a common problem of Llama-family models
## How to use
```python
import transformers
import torch

model_id = "shibing624/llama-3-8b-instruct-262k-chinese"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.float16},
    device="cuda",
)

messages = [{"role": "system", "content": ""}]  # empty system prompt
messages.append({"role": "user", "content": "介绍一下机器学习"})  # "Introduce machine learning"

# Render the chat messages with the model's chat template
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Llama-3 chat emits <|eot_id|> in addition to the standard EOS token
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9
)

content = outputs[0]["generated_text"][len(prompt):]  # strip the echoed prompt
print(content)
```
result:
```shell
机器学习(Machine Learning)是一种基于计算机算法的自动数据分析技术,用于从数据中学习并预测未来的结果。它是人工智能(AI)和数据挖掘(Data Mining)的子领域,旨在通过训练和调整算法来发现数据中的模式、关系和规律。
机器学习算法可以分为监督学习、无监督学习和半监督学习三类:
1. 监督学习(Supervised Learning):在这种类型的学习中,算法被提供带有标签的数据集,用于训练。算法学习如何将输入数据映射到输出数据,并在新数据上进行预测。常见的监督学习算法包括逻辑回归、决策树、支持向量机(SVM)、随机森林和神经网络。
2. 无监督学习(Unsupervised Learning):在这种类型的学习中,算法没有标签数据。算法学习数据中的模式、结构和关系,并可能发现新的数据集群或特征。常见的无监督学习算法包括聚类、主成分分析(PCA)、独立成分分析(ICA)和高维度数据降维。
3. 半监督学习(Semi-supervised Learning):在这种类型的学习中,算法被提供部分带有标签的数据集。算法学习如何将输入数据映射到输出数据,并在新数据上进行预测。半监督学习算法结合了监督学习和无监督学习的优点,常见的半监督学习算法包括自我标注(Self-Labeling)和基于图的半监督学习(Graph-based Semi-supervised Learning)。
机器学习的应用广泛,包括自然语言处理、计算机视觉、推荐系统、人工智能和自动驾驶等领域。它的优势包括:
1. 自动化:机器学习算法可以自动从数据中发现模式和关系,无需人为干预。
2. 高效性:机器学习算法可以处理大量数据,并且可以在不需要人为干预的情况下进行预测。
3. 适应性:机器学习算法可以根据数据集的变化和更新进行调整。
4. 精准性:机器学习算法可以通过训练和测试来提高预测的准确性。
```
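To hit the Int4 footprint listed in the table above, the model can be loaded in 4-bit with bitsandbytes; a sketch follows (the quantization settings are typical defaults, not author-specified):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "shibing624/llama-3-8b-instruct-262k-chinese"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```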
## Training details
Train loss:
<img src="https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese/raw/main/train_lossv2.svg" width="600">
Eval loss:
<img src="https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese/raw/main/eval_lossv2.svg" width="600">
# About Llama-3-8B-Instruct-262k
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business; reach out to the Gradient team to learn more or to collaborate on a custom model.
This model extends Llama-3 8B's context length from 8k to over 160K tokens. It was developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai), and demonstrates that SOTA LLMs can learn to operate on long context with minimal training (< 200M tokens) by appropriately adjusting RoPE theta.
<img src="https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/hiHWva3CbsrnPvZTp5-lu.png" width="600">
**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta (a rough sketch of this step follows the list), followed by a new data-driven RoPE theta optimization technique
- Progressive training on increasing context lengths similar to the [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
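As a rough illustration of the NTK-aware initialization step — my reading of [1], not the authors' exact schedule; their final theta values in the table below come from the subsequent data-driven optimization:

```python
def ntk_aware_rope_theta(base_theta: float, scale: float, head_dim: int) -> float:
    """Common NTK-aware rule: grow the RoPE base with the context extension
    factor, theta' = theta * scale ** (d / (d - 2)) for head dimension d."""
    return base_theta * scale ** (head_dim / (head_dim - 2))

# Illustrative numbers only (Llama-3 uses head_dim=128 and base theta=500000):
print(ntk_aware_rope_theta(500_000.0, scale=8.0, head_dim=128))  # e.g. an 8k -> 64k extension
```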
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 262144 tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster.
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
**Progressive Training Details:**
| Parameter | 65K | 262K |
|-----------------------------|----------------|------------|
| Initialize From | LLaMA-3-8B-Inst| 65K |
| Sequence Length | 2^16 | 2^18 |
| RoPE theta | 15.3 M | 207.1 M |
| Batch Size (Tokens / Step) | 2.097 M | 4.192 M |
| Steps | 30 | 24 |
| Total Tokens | 63 M | 101 M |
| Learning Rate | 2.00E-05 | 2.00E-05 |
| # GPUs | 32 | 32 |
| GPU Type | NVIDIA L40S | NVIDIA L40S|
| {"language": ["zh", "en"], "license": "other", "tags": ["llama3", "chinese", "meta"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE"} | shibing624/llama-3-8b-instruct-262k-chinese | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama3",
"chinese",
"meta",
"conversational",
"zh",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:44:17+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1473
- F1 Score: 0.8409
- Accuracy: 0.8410
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4803 | 13.33 | 200 | 0.3543 | 0.8368 | 0.8368 |
| 0.2249 | 26.67 | 400 | 0.4923 | 0.8410 | 0.8410 |
| 0.1241 | 40.0 | 600 | 0.7904 | 0.7980 | 0.7992 |
| 0.0702 | 53.33 | 800 | 0.9285 | 0.8074 | 0.8075 |
| 0.0509 | 66.67 | 1000 | 0.8517 | 0.8152 | 0.8159 |
| 0.0349 | 80.0 | 1200 | 0.9121 | 0.8242 | 0.8243 |
| 0.0262 | 93.33 | 1400 | 0.9590 | 0.8243 | 0.8243 |
| 0.0264 | 106.67 | 1600 | 0.9886 | 0.8410 | 0.8410 |
| 0.0177 | 120.0 | 1800 | 1.0063 | 0.8284 | 0.8285 |
| 0.013 | 133.33 | 2000 | 1.2040 | 0.8368 | 0.8368 |
| 0.0162 | 146.67 | 2200 | 1.1041 | 0.8533 | 0.8536 |
| 0.013 | 160.0 | 2400 | 1.2578 | 0.8159 | 0.8159 |
| 0.0138 | 173.33 | 2600 | 0.9836 | 0.8452 | 0.8452 |
| 0.0093 | 186.67 | 2800 | 1.1183 | 0.8368 | 0.8368 |
| 0.0101 | 200.0 | 3000 | 1.0961 | 0.8452 | 0.8452 |
| 0.0111 | 213.33 | 3200 | 0.9007 | 0.8577 | 0.8577 |
| 0.0094 | 226.67 | 3400 | 1.0733 | 0.8408 | 0.8410 |
| 0.0103 | 240.0 | 3600 | 1.0371 | 0.8243 | 0.8243 |
| 0.0042 | 253.33 | 3800 | 1.1633 | 0.8368 | 0.8368 |
| 0.009 | 266.67 | 4000 | 1.0699 | 0.8452 | 0.8452 |
| 0.0073 | 280.0 | 4200 | 1.1294 | 0.8450 | 0.8452 |
| 0.0053 | 293.33 | 4400 | 1.3100 | 0.8452 | 0.8452 |
| 0.005 | 306.67 | 4600 | 1.2680 | 0.8408 | 0.8410 |
| 0.0064 | 320.0 | 4800 | 1.0098 | 0.8493 | 0.8494 |
| 0.0048 | 333.33 | 5000 | 1.2811 | 0.8450 | 0.8452 |
| 0.0039 | 346.67 | 5200 | 1.3538 | 0.8284 | 0.8285 |
| 0.0056 | 360.0 | 5400 | 1.3837 | 0.8367 | 0.8368 |
| 0.0034 | 373.33 | 5600 | 1.5433 | 0.8198 | 0.8201 |
| 0.004 | 386.67 | 5800 | 1.3904 | 0.8284 | 0.8285 |
| 0.0033 | 400.0 | 6000 | 1.3728 | 0.8075 | 0.8075 |
| 0.0045 | 413.33 | 6200 | 1.4619 | 0.8367 | 0.8368 |
| 0.0044 | 426.67 | 6400 | 1.2779 | 0.8285 | 0.8285 |
| 0.0027 | 440.0 | 6600 | 1.2879 | 0.8324 | 0.8326 |
| 0.0033 | 453.33 | 6800 | 1.2179 | 0.8494 | 0.8494 |
| 0.0015 | 466.67 | 7000 | 1.3028 | 0.8280 | 0.8285 |
| 0.0026 | 480.0 | 7200 | 1.3398 | 0.8280 | 0.8285 |
| 0.002 | 493.33 | 7400 | 1.2803 | 0.8452 | 0.8452 |
| 0.0014 | 506.67 | 7600 | 1.3104 | 0.8408 | 0.8410 |
| 0.003 | 520.0 | 7800 | 1.3562 | 0.8451 | 0.8452 |
| 0.0021 | 533.33 | 8000 | 1.3905 | 0.8243 | 0.8243 |
| 0.0018 | 546.67 | 8200 | 1.4232 | 0.8285 | 0.8285 |
| 0.0016 | 560.0 | 8400 | 1.4825 | 0.8280 | 0.8285 |
| 0.0021 | 573.33 | 8600 | 1.3714 | 0.8451 | 0.8452 |
| 0.0019 | 586.67 | 8800 | 1.4865 | 0.8325 | 0.8326 |
| 0.0023 | 600.0 | 9000 | 1.3422 | 0.8326 | 0.8326 |
| 0.0019 | 613.33 | 9200 | 1.3684 | 0.8368 | 0.8368 |
| 0.0009 | 626.67 | 9400 | 1.4483 | 0.8326 | 0.8326 |
| 0.0011 | 640.0 | 9600 | 1.4090 | 0.8410 | 0.8410 |
| 0.0012 | 653.33 | 9800 | 1.4079 | 0.8451 | 0.8452 |
| 0.0008 | 666.67 | 10000 | 1.4164 | 0.8451 | 0.8452 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_3-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:44:40+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4271
- F1 Score: 0.8532
- Accuracy: 0.8536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5526 | 13.33 | 200 | 0.3903 | 0.8075 | 0.8075 |
| 0.3247 | 26.67 | 400 | 0.3813 | 0.8493 | 0.8494 |
| 0.229 | 40.0 | 600 | 0.4402 | 0.8326 | 0.8326 |
| 0.1767 | 53.33 | 800 | 0.5199 | 0.8451 | 0.8452 |
| 0.1331 | 66.67 | 1000 | 0.6064 | 0.8325 | 0.8326 |
| 0.1045 | 80.0 | 1200 | 0.6995 | 0.8409 | 0.8410 |
| 0.0923 | 93.33 | 1400 | 0.6936 | 0.8198 | 0.8201 |
| 0.0705 | 106.67 | 1600 | 0.7835 | 0.8324 | 0.8326 |
| 0.0617 | 120.0 | 1800 | 0.8372 | 0.8075 | 0.8075 |
| 0.0526 | 133.33 | 2000 | 0.8845 | 0.8197 | 0.8201 |
| 0.0463 | 146.67 | 2200 | 0.9266 | 0.8116 | 0.8117 |
| 0.0421 | 160.0 | 2400 | 1.0798 | 0.8321 | 0.8326 |
| 0.0362 | 173.33 | 2600 | 1.0632 | 0.8235 | 0.8243 |
| 0.0321 | 186.67 | 2800 | 1.1024 | 0.8155 | 0.8159 |
| 0.0316 | 200.0 | 3000 | 1.0857 | 0.8194 | 0.8201 |
| 0.0291 | 213.33 | 3200 | 1.0118 | 0.8241 | 0.8243 |
| 0.0264 | 226.67 | 3400 | 1.0152 | 0.8116 | 0.8117 |
| 0.0245 | 240.0 | 3600 | 1.0778 | 0.8159 | 0.8159 |
| 0.0192 | 253.33 | 3800 | 1.2326 | 0.8281 | 0.8285 |
| 0.02 | 266.67 | 4000 | 1.1461 | 0.8241 | 0.8243 |
| 0.0211 | 280.0 | 4200 | 1.1157 | 0.8325 | 0.8326 |
| 0.0202 | 293.33 | 4400 | 1.1613 | 0.8201 | 0.8201 |
| 0.0168 | 306.67 | 4600 | 1.2245 | 0.8282 | 0.8285 |
| 0.0144 | 320.0 | 4800 | 1.1559 | 0.8325 | 0.8326 |
| 0.0151 | 333.33 | 5000 | 1.2483 | 0.8364 | 0.8368 |
| 0.015 | 346.67 | 5200 | 1.2253 | 0.8326 | 0.8326 |
| 0.0148 | 360.0 | 5400 | 1.2649 | 0.8284 | 0.8285 |
| 0.0134 | 373.33 | 5600 | 1.2890 | 0.8285 | 0.8285 |
| 0.0155 | 386.67 | 5800 | 1.2662 | 0.8326 | 0.8326 |
| 0.0115 | 400.0 | 6000 | 1.3286 | 0.8326 | 0.8326 |
| 0.0116 | 413.33 | 6200 | 1.3486 | 0.8324 | 0.8326 |
| 0.0119 | 426.67 | 6400 | 1.2944 | 0.8241 | 0.8243 |
| 0.0112 | 440.0 | 6600 | 1.2818 | 0.8326 | 0.8326 |
| 0.013 | 453.33 | 6800 | 1.2444 | 0.8368 | 0.8368 |
| 0.0079 | 466.67 | 7000 | 1.2534 | 0.8284 | 0.8285 |
| 0.0094 | 480.0 | 7200 | 1.3682 | 0.8448 | 0.8452 |
| 0.0088 | 493.33 | 7400 | 1.3350 | 0.8284 | 0.8285 |
| 0.0081 | 506.67 | 7600 | 1.3950 | 0.8366 | 0.8368 |
| 0.0092 | 520.0 | 7800 | 1.3067 | 0.8326 | 0.8326 |
| 0.0087 | 533.33 | 8000 | 1.3583 | 0.8326 | 0.8326 |
| 0.0094 | 546.67 | 8200 | 1.4055 | 0.8408 | 0.8410 |
| 0.008 | 560.0 | 8400 | 1.3319 | 0.8368 | 0.8368 |
| 0.0071 | 573.33 | 8600 | 1.3699 | 0.8326 | 0.8326 |
| 0.0074 | 586.67 | 8800 | 1.4303 | 0.8324 | 0.8326 |
| 0.0073 | 600.0 | 9000 | 1.3714 | 0.8326 | 0.8326 |
| 0.0081 | 613.33 | 9200 | 1.3644 | 0.8284 | 0.8285 |
| 0.0067 | 626.67 | 9400 | 1.3521 | 0.8325 | 0.8326 |
| 0.007 | 640.0 | 9600 | 1.3531 | 0.8325 | 0.8326 |
| 0.006 | 653.33 | 9800 | 1.3745 | 0.8283 | 0.8285 |
| 0.0067 | 666.67 | 10000 | 1.3686 | 0.8283 | 0.8285 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_3-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:45:08+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4323
- F1 Score: 0.8749
- Accuracy: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4476 | 9.52 | 200 | 0.3555 | 0.8108 | 0.8110 |
| 0.3206 | 19.05 | 400 | 0.3355 | 0.8413 | 0.8415 |
| 0.2895 | 28.57 | 600 | 0.3284 | 0.8536 | 0.8537 |
| 0.2628 | 38.1 | 800 | 0.3192 | 0.8598 | 0.8598 |
| 0.2341 | 47.62 | 1000 | 0.3126 | 0.8506 | 0.8506 |
| 0.2149 | 57.14 | 1200 | 0.3150 | 0.8689 | 0.8689 |
| 0.1954 | 66.67 | 1400 | 0.3327 | 0.8658 | 0.8659 |
| 0.1826 | 76.19 | 1600 | 0.3650 | 0.8625 | 0.8628 |
| 0.1651 | 85.71 | 1800 | 0.3472 | 0.8627 | 0.8628 |
| 0.1523 | 95.24 | 2000 | 0.3714 | 0.8597 | 0.8598 |
| 0.144 | 104.76 | 2200 | 0.3890 | 0.8596 | 0.8598 |
| 0.136 | 114.29 | 2400 | 0.4043 | 0.8687 | 0.8689 |
| 0.1308 | 123.81 | 2600 | 0.4138 | 0.8718 | 0.8720 |
| 0.1243 | 133.33 | 2800 | 0.4041 | 0.8718 | 0.8720 |
| 0.1185 | 142.86 | 3000 | 0.4698 | 0.8687 | 0.8689 |
| 0.1142 | 152.38 | 3200 | 0.4658 | 0.8778 | 0.8780 |
| 0.106 | 161.9 | 3400 | 0.4865 | 0.8778 | 0.8780 |
| 0.1041 | 171.43 | 3600 | 0.4803 | 0.8809 | 0.8811 |
| 0.0929 | 180.95 | 3800 | 0.5408 | 0.8746 | 0.875 |
| 0.0951 | 190.48 | 4000 | 0.4773 | 0.8780 | 0.8780 |
| 0.0911 | 200.0 | 4200 | 0.5256 | 0.8778 | 0.8780 |
| 0.0887 | 209.52 | 4400 | 0.5495 | 0.8778 | 0.8780 |
| 0.0843 | 219.05 | 4600 | 0.5791 | 0.8623 | 0.8628 |
| 0.0861 | 228.57 | 4800 | 0.5309 | 0.8809 | 0.8811 |
| 0.0803 | 238.1 | 5000 | 0.5498 | 0.8778 | 0.8780 |
| 0.0752 | 247.62 | 5200 | 0.6053 | 0.8715 | 0.8720 |
| 0.0743 | 257.14 | 5400 | 0.5967 | 0.8685 | 0.8689 |
| 0.0765 | 266.67 | 5600 | 0.5486 | 0.8778 | 0.8780 |
| 0.0768 | 276.19 | 5800 | 0.5428 | 0.8778 | 0.8780 |
| 0.0718 | 285.71 | 6000 | 0.5733 | 0.8778 | 0.8780 |
| 0.0696 | 295.24 | 6200 | 0.5869 | 0.8778 | 0.8780 |
| 0.0664 | 304.76 | 6400 | 0.5818 | 0.8809 | 0.8811 |
| 0.0668 | 314.29 | 6600 | 0.6055 | 0.8777 | 0.8780 |
| 0.0624 | 323.81 | 6800 | 0.6224 | 0.8777 | 0.8780 |
| 0.0659 | 333.33 | 7000 | 0.5996 | 0.8778 | 0.8780 |
| 0.0631 | 342.86 | 7200 | 0.5962 | 0.8748 | 0.875 |
| 0.0605 | 352.38 | 7400 | 0.6277 | 0.8717 | 0.8720 |
| 0.0588 | 361.9 | 7600 | 0.6448 | 0.8716 | 0.8720 |
| 0.0575 | 371.43 | 7800 | 0.6577 | 0.8684 | 0.8689 |
| 0.0582 | 380.95 | 8000 | 0.6353 | 0.8717 | 0.8720 |
| 0.0603 | 390.48 | 8200 | 0.6436 | 0.8715 | 0.8720 |
| 0.0597 | 400.0 | 8400 | 0.6446 | 0.8683 | 0.8689 |
| 0.0619 | 409.52 | 8600 | 0.6040 | 0.8747 | 0.875 |
| 0.0538 | 419.05 | 8800 | 0.6475 | 0.8714 | 0.8720 |
| 0.0543 | 428.57 | 9000 | 0.6480 | 0.8715 | 0.8720 |
| 0.0533 | 438.1 | 9200 | 0.6366 | 0.8716 | 0.8720 |
| 0.0588 | 447.62 | 9400 | 0.6348 | 0.8716 | 0.8720 |
| 0.0522 | 457.14 | 9600 | 0.6399 | 0.8716 | 0.8720 |
| 0.0543 | 466.67 | 9800 | 0.6409 | 0.8716 | 0.8720 |
| 0.0535 | 476.19 | 10000 | 0.6396 | 0.8716 | 0.8720 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_2-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:45:12+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7217
- F1 Score: 0.8810
- Accuracy: 0.8811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3982 | 9.52 | 200 | 0.3262 | 0.8505 | 0.8506 |
| 0.2581 | 19.05 | 400 | 0.2917 | 0.8841 | 0.8841 |
| 0.1954 | 28.57 | 600 | 0.3037 | 0.8750 | 0.875 |
| 0.1537 | 38.1 | 800 | 0.3400 | 0.8750 | 0.875 |
| 0.1215 | 47.62 | 1000 | 0.3925 | 0.8902 | 0.8902 |
| 0.0994 | 57.14 | 1200 | 0.4933 | 0.8809 | 0.8811 |
| 0.0788 | 66.67 | 1400 | 0.5644 | 0.8777 | 0.8780 |
| 0.071 | 76.19 | 1600 | 0.5420 | 0.8748 | 0.875 |
| 0.0562 | 85.71 | 1800 | 0.5823 | 0.8902 | 0.8902 |
| 0.0485 | 95.24 | 2000 | 0.6354 | 0.8870 | 0.8872 |
| 0.0403 | 104.76 | 2200 | 0.6703 | 0.8780 | 0.8780 |
| 0.0389 | 114.29 | 2400 | 0.6109 | 0.8839 | 0.8841 |
| 0.036 | 123.81 | 2600 | 0.5863 | 0.8871 | 0.8872 |
| 0.0317 | 133.33 | 2800 | 0.6698 | 0.8748 | 0.875 |
| 0.0322 | 142.86 | 3000 | 0.6769 | 0.8687 | 0.8689 |
| 0.0297 | 152.38 | 3200 | 0.6483 | 0.8902 | 0.8902 |
| 0.0231 | 161.9 | 3400 | 0.7186 | 0.8685 | 0.8689 |
| 0.0238 | 171.43 | 3600 | 0.7712 | 0.8779 | 0.8780 |
| 0.0201 | 180.95 | 3800 | 0.7197 | 0.8871 | 0.8872 |
| 0.0189 | 190.48 | 4000 | 0.7338 | 0.8811 | 0.8811 |
| 0.0189 | 200.0 | 4200 | 0.7400 | 0.8809 | 0.8811 |
| 0.018 | 209.52 | 4400 | 0.7246 | 0.8809 | 0.8811 |
| 0.0163 | 219.05 | 4600 | 0.7142 | 0.8809 | 0.8811 |
| 0.0178 | 228.57 | 4800 | 0.7087 | 0.8872 | 0.8872 |
| 0.0124 | 238.1 | 5000 | 0.8295 | 0.8839 | 0.8841 |
| 0.0107 | 247.62 | 5200 | 0.9201 | 0.8746 | 0.875 |
| 0.0126 | 257.14 | 5400 | 0.8516 | 0.8808 | 0.8811 |
| 0.0123 | 266.67 | 5600 | 0.7599 | 0.8871 | 0.8872 |
| 0.0118 | 276.19 | 5800 | 0.7666 | 0.8933 | 0.8933 |
| 0.0109 | 285.71 | 6000 | 0.7882 | 0.8840 | 0.8841 |
| 0.0091 | 295.24 | 6200 | 0.8149 | 0.8871 | 0.8872 |
| 0.0105 | 304.76 | 6400 | 0.7243 | 0.8963 | 0.8963 |
| 0.0111 | 314.29 | 6600 | 0.8182 | 0.8899 | 0.8902 |
| 0.0089 | 323.81 | 6800 | 0.8178 | 0.8901 | 0.8902 |
| 0.0107 | 333.33 | 7000 | 0.7995 | 0.8902 | 0.8902 |
| 0.0082 | 342.86 | 7200 | 0.8293 | 0.8871 | 0.8872 |
| 0.01 | 352.38 | 7400 | 0.7445 | 0.8933 | 0.8933 |
| 0.0088 | 361.9 | 7600 | 0.7924 | 0.8901 | 0.8902 |
| 0.0075 | 371.43 | 7800 | 0.8247 | 0.8870 | 0.8872 |
| 0.0076 | 380.95 | 8000 | 0.8026 | 0.8841 | 0.8841 |
| 0.0074 | 390.48 | 8200 | 0.8535 | 0.8809 | 0.8811 |
| 0.0071 | 400.0 | 8400 | 0.8746 | 0.8839 | 0.8841 |
| 0.0069 | 409.52 | 8600 | 0.8075 | 0.8902 | 0.8902 |
| 0.0054 | 419.05 | 8800 | 0.8182 | 0.8871 | 0.8872 |
| 0.0067 | 428.57 | 9000 | 0.8328 | 0.8809 | 0.8811 |
| 0.0068 | 438.1 | 9200 | 0.8452 | 0.8809 | 0.8811 |
| 0.0059 | 447.62 | 9400 | 0.8438 | 0.8840 | 0.8841 |
| 0.0059 | 457.14 | 9600 | 0.8414 | 0.8840 | 0.8841 |
| 0.0061 | 466.67 | 9800 | 0.8342 | 0.8809 | 0.8811 |
| 0.0054 | 476.19 | 10000 | 0.8414 | 0.8840 | 0.8841 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_2-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:45:29+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6645
- F1 Score: 0.8811
- Accuracy: 0.8811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3616 | 9.52 | 200 | 0.3021 | 0.8536 | 0.8537 |
| 0.2006 | 19.05 | 400 | 0.3648 | 0.8678 | 0.8689 |
| 0.1313 | 28.57 | 600 | 0.3987 | 0.8840 | 0.8841 |
| 0.0878 | 38.1 | 800 | 0.4055 | 0.9054 | 0.9055 |
| 0.0608 | 47.62 | 1000 | 0.4380 | 0.8902 | 0.8902 |
| 0.0396 | 57.14 | 1200 | 0.5832 | 0.8993 | 0.8994 |
| 0.0329 | 66.67 | 1400 | 0.5412 | 0.8841 | 0.8841 |
| 0.0297 | 76.19 | 1600 | 0.5713 | 0.8900 | 0.8902 |
| 0.0251 | 85.71 | 1800 | 0.6235 | 0.8870 | 0.8872 |
| 0.0175 | 95.24 | 2000 | 0.6229 | 0.8932 | 0.8933 |
| 0.0146 | 104.76 | 2200 | 0.5887 | 0.9054 | 0.9055 |
| 0.0177 | 114.29 | 2400 | 0.5519 | 0.8901 | 0.8902 |
| 0.0119 | 123.81 | 2600 | 0.6173 | 0.8872 | 0.8872 |
| 0.0113 | 133.33 | 2800 | 0.6440 | 0.8933 | 0.8933 |
| 0.0121 | 142.86 | 3000 | 0.5785 | 0.8963 | 0.8963 |
| 0.0091 | 152.38 | 3200 | 0.6040 | 0.8962 | 0.8963 |
| 0.0081 | 161.9 | 3400 | 0.6695 | 0.8930 | 0.8933 |
| 0.0094 | 171.43 | 3600 | 0.5808 | 0.9207 | 0.9207 |
| 0.0055 | 180.95 | 3800 | 0.6948 | 0.8993 | 0.8994 |
| 0.007 | 190.48 | 4000 | 0.7483 | 0.9115 | 0.9116 |
| 0.0072 | 200.0 | 4200 | 0.6142 | 0.9054 | 0.9055 |
| 0.005 | 209.52 | 4400 | 0.7102 | 0.8993 | 0.8994 |
| 0.007 | 219.05 | 4600 | 0.5958 | 0.8870 | 0.8872 |
| 0.0056 | 228.57 | 4800 | 0.6067 | 0.9085 | 0.9085 |
| 0.0042 | 238.1 | 5000 | 0.7074 | 0.8901 | 0.8902 |
| 0.0038 | 247.62 | 5200 | 0.7191 | 0.8991 | 0.8994 |
| 0.0045 | 257.14 | 5400 | 0.5924 | 0.9116 | 0.9116 |
| 0.0037 | 266.67 | 5600 | 0.6330 | 0.9055 | 0.9055 |
| 0.0031 | 276.19 | 5800 | 0.6398 | 0.9023 | 0.9024 |
| 0.0045 | 285.71 | 6000 | 0.6891 | 0.8993 | 0.8994 |
| 0.0027 | 295.24 | 6200 | 0.7027 | 0.9177 | 0.9177 |
| 0.0033 | 304.76 | 6400 | 0.7020 | 0.9054 | 0.9055 |
| 0.003 | 314.29 | 6600 | 0.7121 | 0.8993 | 0.8994 |
| 0.0026 | 323.81 | 6800 | 0.7751 | 0.8963 | 0.8963 |
| 0.0025 | 333.33 | 7000 | 0.7348 | 0.9085 | 0.9085 |
| 0.0018 | 342.86 | 7200 | 0.7936 | 0.9055 | 0.9055 |
| 0.0028 | 352.38 | 7400 | 0.7236 | 0.9055 | 0.9055 |
| 0.0026 | 361.9 | 7600 | 0.6501 | 0.9054 | 0.9055 |
| 0.0022 | 371.43 | 7800 | 0.6888 | 0.9085 | 0.9085 |
| 0.0017 | 380.95 | 8000 | 0.6895 | 0.9055 | 0.9055 |
| 0.0018 | 390.48 | 8200 | 0.7289 | 0.9116 | 0.9116 |
| 0.0014 | 400.0 | 8400 | 0.7563 | 0.9085 | 0.9085 |
| 0.0016 | 409.52 | 8600 | 0.7084 | 0.9116 | 0.9116 |
| 0.0013 | 419.05 | 8800 | 0.7590 | 0.9085 | 0.9085 |
| 0.0009 | 428.57 | 9000 | 0.7604 | 0.9116 | 0.9116 |
| 0.001 | 438.1 | 9200 | 0.7578 | 0.9055 | 0.9055 |
| 0.0015 | 447.62 | 9400 | 0.7548 | 0.9116 | 0.9116 |
| 0.0007 | 457.14 | 9600 | 0.7872 | 0.8993 | 0.8994 |
| 0.0006 | 466.67 | 9800 | 0.7643 | 0.9116 | 0.9116 |
| 0.001 | 476.19 | 10000 | 0.7701 | 0.9116 | 0.9116 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_mouse_2-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:46:41+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3804
- F1 Score: 0.8475
- Accuracy: 0.8468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9702 | 0.7 | 200 | 0.9299 | 0.4471 | 0.5631 |
| 0.9012 | 1.4 | 400 | 0.8756 | 0.5528 | 0.5800 |
| 0.7187 | 2.1 | 600 | 0.5714 | 0.7511 | 0.7501 |
| 0.5478 | 2.8 | 800 | 0.5069 | 0.7840 | 0.7830 |
| 0.512 | 3.5 | 1000 | 0.4850 | 0.7925 | 0.7920 |
| 0.498 | 4.2 | 1200 | 0.4768 | 0.8011 | 0.7996 |
| 0.48 | 4.9 | 1400 | 0.4678 | 0.8050 | 0.8047 |
| 0.4727 | 5.59 | 1600 | 0.4686 | 0.8129 | 0.8128 |
| 0.4602 | 6.29 | 1800 | 0.4730 | 0.8044 | 0.8034 |
| 0.4549 | 6.99 | 2000 | 0.4491 | 0.8166 | 0.8157 |
| 0.4493 | 7.69 | 2200 | 0.4262 | 0.8261 | 0.8260 |
| 0.4376 | 8.39 | 2400 | 0.4393 | 0.8219 | 0.8214 |
| 0.4409 | 9.09 | 2600 | 0.4433 | 0.8189 | 0.8178 |
| 0.4333 | 9.79 | 2800 | 0.4359 | 0.8216 | 0.8209 |
| 0.4323 | 10.49 | 3000 | 0.4403 | 0.8205 | 0.8198 |
| 0.423 | 11.19 | 3200 | 0.4466 | 0.8205 | 0.8196 |
| 0.4264 | 11.89 | 3400 | 0.4211 | 0.8289 | 0.8281 |
| 0.4118 | 12.59 | 3600 | 0.4301 | 0.8290 | 0.8284 |
| 0.4198 | 13.29 | 3800 | 0.4175 | 0.8324 | 0.8317 |
| 0.4129 | 13.99 | 4000 | 0.4398 | 0.8220 | 0.8211 |
| 0.4038 | 14.69 | 4200 | 0.4330 | 0.8253 | 0.8244 |
| 0.4148 | 15.38 | 4400 | 0.4241 | 0.8303 | 0.8295 |
| 0.408 | 16.08 | 4600 | 0.4587 | 0.8120 | 0.8113 |
| 0.4066 | 16.78 | 4800 | 0.4184 | 0.8332 | 0.8323 |
| 0.4002 | 17.48 | 5000 | 0.4429 | 0.8217 | 0.8207 |
| 0.4029 | 18.18 | 5200 | 0.4022 | 0.8409 | 0.8402 |
| 0.397 | 18.88 | 5400 | 0.4166 | 0.8345 | 0.8336 |
| 0.3951 | 19.58 | 5600 | 0.4143 | 0.8376 | 0.8369 |
| 0.4009 | 20.28 | 5800 | 0.4117 | 0.8409 | 0.8402 |
| 0.3921 | 20.98 | 6000 | 0.4044 | 0.8399 | 0.8393 |
| 0.3956 | 21.68 | 6200 | 0.4258 | 0.8297 | 0.8290 |
| 0.3906 | 22.38 | 6400 | 0.4151 | 0.8355 | 0.8347 |
| 0.3888 | 23.08 | 6600 | 0.4197 | 0.8327 | 0.8319 |
| 0.3895 | 23.78 | 6800 | 0.4057 | 0.8399 | 0.8391 |
| 0.3905 | 24.48 | 7000 | 0.4212 | 0.8296 | 0.8288 |
| 0.3894 | 25.17 | 7200 | 0.4062 | 0.8378 | 0.8369 |
| 0.3879 | 25.87 | 7400 | 0.4158 | 0.8340 | 0.8332 |
| 0.3817 | 26.57 | 7600 | 0.4236 | 0.8303 | 0.8295 |
| 0.3803 | 27.27 | 7800 | 0.4165 | 0.8346 | 0.8338 |
| 0.382 | 27.97 | 8000 | 0.4152 | 0.8351 | 0.8343 |
| 0.3845 | 28.67 | 8200 | 0.4170 | 0.8359 | 0.8352 |
| 0.3806 | 29.37 | 8400 | 0.4144 | 0.8356 | 0.8347 |
| 0.3754 | 30.07 | 8600 | 0.4066 | 0.8403 | 0.8395 |
| 0.3795 | 30.77 | 8800 | 0.4171 | 0.8325 | 0.8317 |
| 0.3741 | 31.47 | 9000 | 0.4140 | 0.8368 | 0.8360 |
| 0.3847 | 32.17 | 9200 | 0.4102 | 0.8367 | 0.8358 |
| 0.3739 | 32.87 | 9400 | 0.4150 | 0.8368 | 0.8360 |
| 0.3794 | 33.57 | 9600 | 0.4174 | 0.8342 | 0.8334 |
| 0.3826 | 34.27 | 9800 | 0.4145 | 0.8355 | 0.8347 |
| 0.374 | 34.97 | 10000 | 0.4148 | 0.8353 | 0.8345 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:46:51+00:00 |
image-classification | transformers | {} | niraj003/dinov2-s100-201 | null | [
"transformers",
"tensorboard",
"safetensors",
"dinov2",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:47:45+00:00 |
|
text-generation | null |
# Cran-May/openbuddy-llama3-8b-v21.1-8k-Q4_K_S-GGUF
This model was converted to GGUF format from [`OpenBuddy/openbuddy-llama3-8b-v21.1-8k`](https://huggingface.co/OpenBuddy/openbuddy-llama3-8b-v21.1-8k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OpenBuddy/openbuddy-llama3-8b-v21.1-8k) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Cran-May/openbuddy-llama3-8b-v21.1-8k-Q4_K_S-GGUF --model openbuddy-llama3-8b-v21.1-8k.Q4_K_S.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Cran-May/openbuddy-llama3-8b-v21.1-8k-Q4_K_S-GGUF --model openbuddy-llama3-8b-v21.1-8k.Q4_K_S.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m openbuddy-llama3-8b-v21.1-8k.Q4_K_S.gguf -n 128
```
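If you prefer Python, here is a hedged sketch using the `llama-cpp-python` bindings (install with `pip install llama-cpp-python`; the local GGUF path below is an assumption about where you downloaded the file):
```python
# Hedged sketch: run the quantized checkpoint via llama-cpp-python.
from llama_cpp import Llama

# Assumes the GGUF file has been downloaded to the working directory.
llm = Llama(model_path="openbuddy-llama3-8b-v21.1-8k.Q4_K_S.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```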
| {"language": ["zh", "en", "fr", "de", "ja", "ko", "it", "fi"], "license": "other", "tags": ["llama-3", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "https://llama.meta.com/llama3/license/"} | Cran-May/openbuddy-llama3-8b-v21.1-8k-Q4_K_S-GGUF | null | [
"gguf",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"license:other",
"region:us"
]
| null | 2024-04-27T05:48:07+00:00 |
null | null | {"license": "openrail"} | shakihcp/egera | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-27T05:49:02+00:00 |
|
null | null | {"license": "apache-2.0"} | LoserCheems/OTCE-1B | null | [
"license:apache-2.0",
"region:us"
]
| null | 2024-04-27T05:49:09+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3130
- F1 Score: 0.8775
- Accuracy: 0.8770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9542 | 0.7 | 200 | 0.8949 | 0.5097 | 0.5684 |
| 0.7554 | 1.4 | 400 | 0.5420 | 0.7595 | 0.7589 |
| 0.51 | 2.1 | 600 | 0.4631 | 0.8078 | 0.8075 |
| 0.4611 | 2.8 | 800 | 0.4640 | 0.8088 | 0.8080 |
| 0.4456 | 3.5 | 1000 | 0.4273 | 0.8321 | 0.8317 |
| 0.4294 | 4.2 | 1200 | 0.4145 | 0.8327 | 0.8317 |
| 0.4127 | 4.9 | 1400 | 0.4068 | 0.8360 | 0.8354 |
| 0.4057 | 5.59 | 1600 | 0.4357 | 0.8271 | 0.8273 |
| 0.3912 | 6.29 | 1800 | 0.4216 | 0.8320 | 0.8310 |
| 0.381 | 6.99 | 2000 | 0.3908 | 0.8486 | 0.8477 |
| 0.3749 | 7.69 | 2200 | 0.3888 | 0.8480 | 0.8472 |
| 0.3634 | 8.39 | 2400 | 0.3829 | 0.8538 | 0.8534 |
| 0.3617 | 9.09 | 2600 | 0.4030 | 0.8426 | 0.8413 |
| 0.3542 | 9.79 | 2800 | 0.3773 | 0.8507 | 0.8498 |
| 0.353 | 10.49 | 3000 | 0.3784 | 0.8501 | 0.8494 |
| 0.3427 | 11.19 | 3200 | 0.4068 | 0.8419 | 0.8409 |
| 0.3425 | 11.89 | 3400 | 0.3851 | 0.8471 | 0.8461 |
| 0.33 | 12.59 | 3600 | 0.3885 | 0.8495 | 0.8488 |
| 0.3362 | 13.29 | 3800 | 0.3658 | 0.8630 | 0.8621 |
| 0.3251 | 13.99 | 4000 | 0.3974 | 0.8509 | 0.8496 |
| 0.317 | 14.69 | 4200 | 0.4007 | 0.8402 | 0.8393 |
| 0.3252 | 15.38 | 4400 | 0.3611 | 0.8643 | 0.8637 |
| 0.3178 | 16.08 | 4600 | 0.3869 | 0.8531 | 0.8520 |
| 0.3147 | 16.78 | 4800 | 0.3765 | 0.8585 | 0.8577 |
| 0.3071 | 17.48 | 5000 | 0.3780 | 0.8581 | 0.8571 |
| 0.3097 | 18.18 | 5200 | 0.3498 | 0.8665 | 0.8658 |
| 0.3058 | 18.88 | 5400 | 0.3673 | 0.8622 | 0.8615 |
| 0.3024 | 19.58 | 5600 | 0.3531 | 0.8693 | 0.8687 |
| 0.3106 | 20.28 | 5800 | 0.3465 | 0.8713 | 0.8707 |
| 0.2983 | 20.98 | 6000 | 0.3315 | 0.8744 | 0.8740 |
| 0.2992 | 21.68 | 6200 | 0.3573 | 0.8650 | 0.8643 |
| 0.2969 | 22.38 | 6400 | 0.3603 | 0.8659 | 0.8652 |
| 0.2881 | 23.08 | 6600 | 0.3621 | 0.8651 | 0.8643 |
| 0.2931 | 23.78 | 6800 | 0.3485 | 0.8670 | 0.8663 |
| 0.2916 | 24.48 | 7000 | 0.3610 | 0.8631 | 0.8623 |
| 0.2926 | 25.17 | 7200 | 0.3503 | 0.8664 | 0.8656 |
| 0.2901 | 25.87 | 7400 | 0.3512 | 0.8666 | 0.8658 |
| 0.2871 | 26.57 | 7600 | 0.3668 | 0.8577 | 0.8569 |
| 0.2831 | 27.27 | 7800 | 0.3581 | 0.8663 | 0.8656 |
| 0.2859 | 27.97 | 8000 | 0.3566 | 0.8670 | 0.8663 |
| 0.2889 | 28.67 | 8200 | 0.3415 | 0.8713 | 0.8707 |
| 0.2776 | 29.37 | 8400 | 0.3523 | 0.8673 | 0.8665 |
| 0.2781 | 30.07 | 8600 | 0.3478 | 0.8698 | 0.8691 |
| 0.2757 | 30.77 | 8800 | 0.3556 | 0.8669 | 0.8661 |
| 0.2796 | 31.47 | 9000 | 0.3535 | 0.8675 | 0.8667 |
| 0.2835 | 32.17 | 9200 | 0.3457 | 0.8722 | 0.8715 |
| 0.2789 | 32.87 | 9400 | 0.3514 | 0.8693 | 0.8687 |
| 0.2761 | 33.57 | 9600 | 0.3604 | 0.8644 | 0.8637 |
| 0.2775 | 34.27 | 9800 | 0.3541 | 0.8670 | 0.8663 |
| 0.2737 | 34.97 | 10000 | 0.3539 | 0.8681 | 0.8674 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:52:18+00:00 |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Pretraining_MFM_v3
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unknown dataset.
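As a hedged usage sketch (assuming the checkpoint retains DeBERTa's `[MASK]` token and a usable masked-LM head), the model can be queried through the fill-mask pipeline:
```python
# Hedged sketch: query the checkpoint via the fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="JJ-Tae/Pretraining_MFM_v3")
print(fill("The capital of France is [MASK]."))
```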
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/deberta-base", "model-index": [{"name": "Pretraining_MFM_v3", "results": []}]} | JJ-Tae/Pretraining_MFM_v3 | null | [
"transformers",
"tensorboard",
"safetensors",
"deberta",
"fill-mask",
"generated_from_trainer",
"base_model:microsoft/deberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:53:37+00:00 |
null | null |
# CAI-Synthetic Model
## Overview
The CAI-Synthetic Model is a large language model designed to understand and respond to complex questions. It has been fine-tuned on a synthetic dataset from Mostly AI, enabling it to respond reliably across a variety of contexts and scenarios.
## Base Model and Fine-Tuning
- Base Model: Google/Gemma-7b
- Fine-Tuning Adapter: LoRA Adapter
- Synthetic Dataset: Mostly AI Synthetic Dataset
## Licensing and Usage
The CAI-Synthetic Model is licensed under the terms of its base model, Gemma-7b, and the synthetic dataset's licensing agreements. Ensure compliance with any licensing restrictions when using or distributing this model. Attribution to the source of the fine-tuning adapter and the synthetic dataset is required.
## Prompt Configuration
When using this model, employ the following prompt structure; the instruction should describe a task that requires a response:
```
### Instruction: {instruction}

### Response: {response}
```
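A minimal helper for assembling prompts in this structure might look like the following (the `build_prompt` name is illustrative, not part of the model's API):
```python
# Illustrative helper: format a prompt in the structure shown above.
def build_prompt(instruction: str) -> str:
    return f"### Instruction: {instruction}\n\n### Response: "

print(build_prompt("Summarize the key properties of synthetic data."))
```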
## Usage Scenarios
This model is suitable for various applications, including:
- **Conversational AI:** building chatbots and virtual assistants that can respond in different contexts.
- **Customer Support:** providing automated customer service responses.
- **Knowledge-based Systems:** enhancing systems with contextualized responses based on synthetic data.
## Contact Information
For more information about the CAI-Synthetic Model, licensing, or other inquiries, contact [Inner I Network](https://innerinetcompany.com/about/). | {"license": "gemma", "datasets": ["InnerI/CAI-synthetic-10k"]} | InnerI/CAI-synthetic | null | [
"safetensors",
"dataset:InnerI/CAI-synthetic-10k",
"license:gemma",
"region:us"
]
| null | 2024-04-27T05:54:01+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2938
- F1 Score: 0.8963
- Accuracy: 0.8959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.9466 | 0.7 | 200 | 0.8643 | 0.5148 | 0.5831 |
| 0.5956 | 1.4 | 400 | 0.4555 | 0.8066 | 0.8056 |
| 0.4577 | 2.1 | 600 | 0.4274 | 0.8259 | 0.8251 |
| 0.4153 | 2.8 | 800 | 0.4152 | 0.8299 | 0.8290 |
| 0.3897 | 3.5 | 1000 | 0.3685 | 0.8559 | 0.8555 |
| 0.3746 | 4.2 | 1200 | 0.3909 | 0.8437 | 0.8424 |
| 0.358 | 4.9 | 1400 | 0.3652 | 0.8594 | 0.8584 |
| 0.3457 | 5.59 | 1600 | 0.3913 | 0.8513 | 0.8509 |
| 0.3354 | 6.29 | 1800 | 0.4242 | 0.8295 | 0.8284 |
| 0.3228 | 6.99 | 2000 | 0.3479 | 0.8695 | 0.8687 |
| 0.3119 | 7.69 | 2200 | 0.3577 | 0.8613 | 0.8604 |
| 0.3025 | 8.39 | 2400 | 0.3457 | 0.8699 | 0.8694 |
| 0.3012 | 9.09 | 2600 | 0.3635 | 0.8613 | 0.8599 |
| 0.288 | 9.79 | 2800 | 0.3310 | 0.8762 | 0.8755 |
| 0.2873 | 10.49 | 3000 | 0.3297 | 0.8811 | 0.8805 |
| 0.2744 | 11.19 | 3200 | 0.3476 | 0.8710 | 0.8702 |
| 0.2757 | 11.89 | 3400 | 0.3811 | 0.8562 | 0.8551 |
| 0.2588 | 12.59 | 3600 | 0.3474 | 0.8696 | 0.8689 |
| 0.2623 | 13.29 | 3800 | 0.3304 | 0.8825 | 0.8816 |
| 0.2531 | 13.99 | 4000 | 0.3333 | 0.8779 | 0.8770 |
| 0.2449 | 14.69 | 4200 | 0.3418 | 0.8759 | 0.8751 |
| 0.2511 | 15.38 | 4400 | 0.3267 | 0.8831 | 0.8825 |
| 0.2379 | 16.08 | 4600 | 0.3480 | 0.8743 | 0.8735 |
| 0.2355 | 16.78 | 4800 | 0.3266 | 0.8795 | 0.8788 |
| 0.2293 | 17.48 | 5000 | 0.3219 | 0.8859 | 0.8851 |
| 0.2314 | 18.18 | 5200 | 0.3096 | 0.8926 | 0.8922 |
| 0.225 | 18.88 | 5400 | 0.3123 | 0.8881 | 0.8875 |
| 0.2203 | 19.58 | 5600 | 0.3278 | 0.8833 | 0.8827 |
| 0.2245 | 20.28 | 5800 | 0.2965 | 0.8963 | 0.8959 |
| 0.2128 | 20.98 | 6000 | 0.2976 | 0.8982 | 0.8979 |
| 0.2138 | 21.68 | 6200 | 0.2932 | 0.8977 | 0.8974 |
| 0.2074 | 22.38 | 6400 | 0.3216 | 0.8902 | 0.8895 |
| 0.2046 | 23.08 | 6600 | 0.3221 | 0.8897 | 0.8891 |
| 0.2065 | 23.78 | 6800 | 0.3026 | 0.8968 | 0.8963 |
| 0.2015 | 24.48 | 7000 | 0.3030 | 0.8983 | 0.8979 |
| 0.2007 | 25.17 | 7200 | 0.3208 | 0.8877 | 0.8871 |
| 0.1996 | 25.87 | 7400 | 0.3060 | 0.8949 | 0.8943 |
| 0.1945 | 26.57 | 7600 | 0.3219 | 0.8891 | 0.8884 |
| 0.1929 | 27.27 | 7800 | 0.3086 | 0.8948 | 0.8943 |
| 0.1935 | 27.97 | 8000 | 0.3144 | 0.8948 | 0.8943 |
| 0.1936 | 28.67 | 8200 | 0.3078 | 0.8966 | 0.8961 |
| 0.1836 | 29.37 | 8400 | 0.3153 | 0.8927 | 0.8922 |
| 0.1815 | 30.07 | 8600 | 0.3117 | 0.8970 | 0.8965 |
| 0.183 | 30.77 | 8800 | 0.3181 | 0.8949 | 0.8943 |
| 0.1865 | 31.47 | 9000 | 0.3161 | 0.8960 | 0.8954 |
| 0.1859 | 32.17 | 9200 | 0.3103 | 0.8981 | 0.8976 |
| 0.1802 | 32.87 | 9400 | 0.3170 | 0.8957 | 0.8952 |
| 0.1806 | 33.57 | 9600 | 0.3252 | 0.8925 | 0.8919 |
| 0.1803 | 34.27 | 9800 | 0.3181 | 0.8957 | 0.8952 |
| 0.1787 | 34.97 | 10000 | 0.3147 | 0.8968 | 0.8963 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:54:39+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3607
- F1 Score: 0.8409
- Accuracy: 0.841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5635 | 0.79 | 200 | 0.4994 | 0.7418 | 0.742 |
| 0.4892 | 1.58 | 400 | 0.4840 | 0.7565 | 0.757 |
| 0.4811 | 2.37 | 600 | 0.4766 | 0.7625 | 0.763 |
| 0.4704 | 3.16 | 800 | 0.4816 | 0.7579 | 0.758 |
| 0.4635 | 3.95 | 1000 | 0.4639 | 0.7697 | 0.77 |
| 0.4626 | 4.74 | 1200 | 0.4702 | 0.7750 | 0.775 |
| 0.4589 | 5.53 | 1400 | 0.4735 | 0.7728 | 0.773 |
| 0.4547 | 6.32 | 1600 | 0.4753 | 0.7583 | 0.759 |
| 0.4563 | 7.11 | 1800 | 0.4752 | 0.7665 | 0.767 |
| 0.457 | 7.91 | 2000 | 0.4700 | 0.7717 | 0.772 |
| 0.4517 | 8.7 | 2200 | 0.4640 | 0.7719 | 0.772 |
| 0.4519 | 9.49 | 2400 | 0.4543 | 0.7920 | 0.792 |
| 0.4498 | 10.28 | 2600 | 0.4856 | 0.7534 | 0.755 |
| 0.4463 | 11.07 | 2800 | 0.4689 | 0.7715 | 0.772 |
| 0.4448 | 11.86 | 3000 | 0.4686 | 0.7726 | 0.773 |
| 0.447 | 12.65 | 3200 | 0.4704 | 0.7653 | 0.766 |
| 0.4433 | 13.44 | 3400 | 0.4580 | 0.7831 | 0.783 |
| 0.4428 | 14.23 | 3600 | 0.4570 | 0.7821 | 0.782 |
| 0.4448 | 15.02 | 3800 | 0.4687 | 0.7777 | 0.778 |
| 0.445 | 15.81 | 4000 | 0.4620 | 0.7736 | 0.774 |
| 0.4408 | 16.6 | 4200 | 0.4574 | 0.7890 | 0.789 |
| 0.4412 | 17.39 | 4400 | 0.4755 | 0.7693 | 0.77 |
| 0.4398 | 18.18 | 4600 | 0.4620 | 0.7810 | 0.781 |
| 0.4374 | 18.97 | 4800 | 0.4671 | 0.7715 | 0.772 |
| 0.4416 | 19.76 | 5000 | 0.4561 | 0.7900 | 0.79 |
| 0.4368 | 20.55 | 5200 | 0.4514 | 0.7950 | 0.795 |
| 0.4365 | 21.34 | 5400 | 0.4618 | 0.7778 | 0.778 |
| 0.4352 | 22.13 | 5600 | 0.4628 | 0.7849 | 0.785 |
| 0.4399 | 22.92 | 5800 | 0.4552 | 0.7911 | 0.791 |
| 0.4322 | 23.72 | 6000 | 0.4633 | 0.7849 | 0.785 |
| 0.4361 | 24.51 | 6200 | 0.4529 | 0.7901 | 0.79 |
| 0.4389 | 25.3 | 6400 | 0.4563 | 0.7900 | 0.79 |
| 0.4339 | 26.09 | 6600 | 0.4562 | 0.7900 | 0.79 |
| 0.4333 | 26.88 | 6800 | 0.4605 | 0.7899 | 0.79 |
| 0.4344 | 27.67 | 7000 | 0.4522 | 0.7920 | 0.792 |
| 0.4323 | 28.46 | 7200 | 0.4511 | 0.7900 | 0.79 |
| 0.4334 | 29.25 | 7400 | 0.4550 | 0.7921 | 0.792 |
| 0.4367 | 30.04 | 7600 | 0.4547 | 0.7931 | 0.793 |
| 0.4336 | 30.83 | 7800 | 0.4574 | 0.7890 | 0.789 |
| 0.4332 | 31.62 | 8000 | 0.4493 | 0.7910 | 0.791 |
| 0.4336 | 32.41 | 8200 | 0.4571 | 0.7880 | 0.788 |
| 0.4285 | 33.2 | 8400 | 0.4565 | 0.7860 | 0.786 |
| 0.4357 | 33.99 | 8600 | 0.4540 | 0.7951 | 0.795 |
| 0.4337 | 34.78 | 8800 | 0.4518 | 0.7901 | 0.79 |
| 0.4274 | 35.57 | 9000 | 0.4544 | 0.7921 | 0.792 |
| 0.43 | 36.36 | 9200 | 0.4592 | 0.7910 | 0.791 |
| 0.4333 | 37.15 | 9400 | 0.4599 | 0.7879 | 0.788 |
| 0.4312 | 37.94 | 9600 | 0.4565 | 0.7940 | 0.794 |
| 0.4336 | 38.74 | 9800 | 0.4573 | 0.7930 | 0.793 |
| 0.4316 | 39.53 | 10000 | 0.4571 | 0.7940 | 0.794 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_0-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:55:03+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3528
- F1 Score: 0.8428
- Accuracy: 0.843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5386 | 0.79 | 200 | 0.4856 | 0.7561 | 0.757 |
| 0.4747 | 1.58 | 400 | 0.4722 | 0.7628 | 0.763 |
| 0.4664 | 2.37 | 600 | 0.4645 | 0.7786 | 0.779 |
| 0.4576 | 3.16 | 800 | 0.4698 | 0.7679 | 0.768 |
| 0.4508 | 3.95 | 1000 | 0.4534 | 0.7846 | 0.785 |
| 0.4487 | 4.74 | 1200 | 0.4598 | 0.7800 | 0.78 |
| 0.4443 | 5.53 | 1400 | 0.4762 | 0.7722 | 0.773 |
| 0.4404 | 6.32 | 1600 | 0.4650 | 0.7797 | 0.78 |
| 0.4415 | 7.11 | 1800 | 0.4669 | 0.7714 | 0.772 |
| 0.4397 | 7.91 | 2000 | 0.4687 | 0.7754 | 0.776 |
| 0.4345 | 8.7 | 2200 | 0.4578 | 0.792 | 0.792 |
| 0.4336 | 9.49 | 2400 | 0.4502 | 0.7850 | 0.785 |
| 0.432 | 10.28 | 2600 | 0.4730 | 0.7679 | 0.769 |
| 0.4287 | 11.07 | 2800 | 0.4664 | 0.7701 | 0.771 |
| 0.4263 | 11.86 | 3000 | 0.4631 | 0.7797 | 0.78 |
| 0.4252 | 12.65 | 3200 | 0.4613 | 0.7767 | 0.777 |
| 0.4226 | 13.44 | 3400 | 0.4561 | 0.7930 | 0.793 |
| 0.4222 | 14.23 | 3600 | 0.4577 | 0.7860 | 0.786 |
| 0.422 | 15.02 | 3800 | 0.4680 | 0.7807 | 0.781 |
| 0.4211 | 15.81 | 4000 | 0.4573 | 0.7815 | 0.782 |
| 0.4155 | 16.6 | 4200 | 0.4588 | 0.7861 | 0.786 |
| 0.4175 | 17.39 | 4400 | 0.4747 | 0.7709 | 0.772 |
| 0.4147 | 18.18 | 4600 | 0.4597 | 0.7820 | 0.782 |
| 0.4111 | 18.97 | 4800 | 0.4718 | 0.7702 | 0.771 |
| 0.4146 | 19.76 | 5000 | 0.4620 | 0.7798 | 0.78 |
| 0.4133 | 20.55 | 5200 | 0.4548 | 0.7851 | 0.785 |
| 0.4074 | 21.34 | 5400 | 0.4699 | 0.7678 | 0.769 |
| 0.4074 | 22.13 | 5600 | 0.4736 | 0.7747 | 0.775 |
| 0.411 | 22.92 | 5800 | 0.4597 | 0.7799 | 0.78 |
| 0.4029 | 23.72 | 6000 | 0.4688 | 0.7748 | 0.775 |
| 0.4073 | 24.51 | 6200 | 0.4631 | 0.7869 | 0.787 |
| 0.4092 | 25.3 | 6400 | 0.4622 | 0.7830 | 0.783 |
| 0.4031 | 26.09 | 6600 | 0.4634 | 0.7859 | 0.786 |
| 0.402 | 26.88 | 6800 | 0.4682 | 0.7858 | 0.786 |
| 0.402 | 27.67 | 7000 | 0.4595 | 0.7851 | 0.785 |
| 0.4007 | 28.46 | 7200 | 0.4630 | 0.7871 | 0.787 |
| 0.4028 | 29.25 | 7400 | 0.4655 | 0.7789 | 0.779 |
| 0.4023 | 30.04 | 7600 | 0.4693 | 0.7819 | 0.782 |
| 0.4009 | 30.83 | 7800 | 0.4683 | 0.7859 | 0.786 |
| 0.4018 | 31.62 | 8000 | 0.4613 | 0.7881 | 0.788 |
| 0.4021 | 32.41 | 8200 | 0.4691 | 0.7799 | 0.78 |
| 0.3937 | 33.2 | 8400 | 0.4662 | 0.7859 | 0.786 |
| 0.4001 | 33.99 | 8600 | 0.4675 | 0.7860 | 0.786 |
| 0.3996 | 34.78 | 8800 | 0.4635 | 0.7870 | 0.787 |
| 0.3931 | 35.57 | 9000 | 0.4651 | 0.7840 | 0.784 |
| 0.3965 | 36.36 | 9200 | 0.4731 | 0.7819 | 0.782 |
| 0.3971 | 37.15 | 9400 | 0.4751 | 0.7738 | 0.774 |
| 0.3951 | 37.94 | 9600 | 0.4701 | 0.7820 | 0.782 |
| 0.4001 | 38.74 | 9800 | 0.4709 | 0.7779 | 0.778 |
| 0.3961 | 39.53 | 10000 | 0.4705 | 0.7819 | 0.782 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_0-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:55:03+00:00 |
automatic-speech-recognition | transformers | {} | Purukoli/whisper-hindi | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:55:15+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3627
- F1 Score: 0.8366
- Accuracy: 0.837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5192 | 0.79 | 200 | 0.4765 | 0.7689 | 0.769 |
| 0.467 | 1.58 | 400 | 0.4630 | 0.7830 | 0.783 |
| 0.4572 | 2.37 | 600 | 0.4575 | 0.7893 | 0.79 |
| 0.4488 | 3.16 | 800 | 0.4623 | 0.7800 | 0.78 |
| 0.4411 | 3.95 | 1000 | 0.4550 | 0.7787 | 0.779 |
| 0.4383 | 4.74 | 1200 | 0.4547 | 0.7910 | 0.791 |
| 0.4314 | 5.53 | 1400 | 0.4777 | 0.7703 | 0.771 |
| 0.4266 | 6.32 | 1600 | 0.4651 | 0.7869 | 0.787 |
| 0.4256 | 7.11 | 1800 | 0.4684 | 0.7716 | 0.772 |
| 0.423 | 7.91 | 2000 | 0.4630 | 0.7737 | 0.774 |
| 0.4161 | 8.7 | 2200 | 0.4715 | 0.7729 | 0.773 |
| 0.4123 | 9.49 | 2400 | 0.4632 | 0.7810 | 0.781 |
| 0.4114 | 10.28 | 2600 | 0.4778 | 0.7755 | 0.776 |
| 0.4068 | 11.07 | 2800 | 0.4784 | 0.7678 | 0.768 |
| 0.4019 | 11.86 | 3000 | 0.4931 | 0.7768 | 0.777 |
| 0.3986 | 12.65 | 3200 | 0.4738 | 0.7800 | 0.78 |
| 0.394 | 13.44 | 3400 | 0.4854 | 0.7831 | 0.783 |
| 0.3927 | 14.23 | 3600 | 0.4796 | 0.7750 | 0.775 |
| 0.392 | 15.02 | 3800 | 0.4955 | 0.7735 | 0.774 |
| 0.3875 | 15.81 | 4000 | 0.4666 | 0.7750 | 0.775 |
| 0.3823 | 16.6 | 4200 | 0.4937 | 0.7691 | 0.769 |
| 0.3833 | 17.39 | 4400 | 0.4885 | 0.7605 | 0.761 |
| 0.3799 | 18.18 | 4600 | 0.4851 | 0.7731 | 0.773 |
| 0.3747 | 18.97 | 4800 | 0.4933 | 0.7674 | 0.768 |
| 0.3769 | 19.76 | 5000 | 0.4682 | 0.7771 | 0.777 |
| 0.3734 | 20.55 | 5200 | 0.4840 | 0.7700 | 0.77 |
| 0.3646 | 21.34 | 5400 | 0.4968 | 0.7603 | 0.761 |
| 0.3601 | 22.13 | 5600 | 0.5059 | 0.7688 | 0.769 |
| 0.3671 | 22.92 | 5800 | 0.4913 | 0.7700 | 0.77 |
| 0.3548 | 23.72 | 6000 | 0.4869 | 0.7840 | 0.784 |
| 0.3578 | 24.51 | 6200 | 0.4793 | 0.7769 | 0.777 |
| 0.3618 | 25.3 | 6400 | 0.4879 | 0.7729 | 0.773 |
| 0.3515 | 26.09 | 6600 | 0.4902 | 0.7791 | 0.779 |
| 0.3503 | 26.88 | 6800 | 0.4937 | 0.7790 | 0.779 |
| 0.3485 | 27.67 | 7000 | 0.4882 | 0.7821 | 0.782 |
| 0.3447 | 28.46 | 7200 | 0.5060 | 0.7841 | 0.784 |
| 0.3469 | 29.25 | 7400 | 0.5030 | 0.7760 | 0.776 |
| 0.346 | 30.04 | 7600 | 0.5076 | 0.7739 | 0.774 |
| 0.3403 | 30.83 | 7800 | 0.5044 | 0.7770 | 0.777 |
| 0.3414 | 31.62 | 8000 | 0.5016 | 0.7890 | 0.789 |
| 0.3419 | 32.41 | 8200 | 0.5121 | 0.7749 | 0.775 |
| 0.334 | 33.2 | 8400 | 0.5049 | 0.7770 | 0.777 |
| 0.3389 | 33.99 | 8600 | 0.5084 | 0.7780 | 0.778 |
| 0.3376 | 34.78 | 8800 | 0.4986 | 0.7871 | 0.787 |
| 0.3305 | 35.57 | 9000 | 0.5059 | 0.7831 | 0.783 |
| 0.3336 | 36.36 | 9200 | 0.5192 | 0.7709 | 0.771 |
| 0.3339 | 37.15 | 9400 | 0.5232 | 0.7748 | 0.775 |
| 0.33 | 37.94 | 9600 | 0.5195 | 0.7729 | 0.773 |
| 0.3343 | 38.74 | 9800 | 0.5196 | 0.7770 | 0.777 |
| 0.3301 | 39.53 | 10000 | 0.5200 | 0.7750 | 0.775 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_0-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:55:35+00:00 |
null | null | {} | LeeThanh/chenkin | null | [
"region:us"
]
| null | 2024-04-27T05:55:36+00:00 |
|
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ai-human-lab/EEVE-Korean-10.8B-enko-translate-v0.1 | null | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T05:56:01+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3506
- F1 Score: 0.8549
- Accuracy: 0.855
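The card does not yet include usage instructions. As a minimal sketch (hedged: the exact model head and whether `trust_remote_code` is required are not documented here), the adapter could be loaded on top of its base model like this:

```python
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

# Load the base checkpoint the adapter was trained on.
# trust_remote_code is an assumption; custom architectures often require it.
base = AutoModel.from_pretrained(
    "mahdibaghbanzadeh/seqsight_8192_512_30M", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "mahdibaghbanzadeh/seqsight_8192_512_30M", trust_remote_code=True
)

# Attach the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_tf_1-seqsight_8192_512_30M-L1_f"
)
model.eval()
```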
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
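For reference, the settings above map roughly onto the following 🤗 `TrainingArguments`; this is a hedged reconstruction, since the actual training script is not included in the card (in particular, whether the listed batch size is per device or total is not stated):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_tf_1-seqsight_8192_512_30M-L1_f",  # assumed output path
    learning_rate=5e-4,
    per_device_train_batch_size=128,  # listed batch size, assumed per device
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```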
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.581 | 0.83 | 200 | 0.5489 | 0.7280 | 0.728 |
| 0.512 | 1.67 | 400 | 0.5263 | 0.7469 | 0.747 |
| 0.4971 | 2.5 | 600 | 0.5195 | 0.7469 | 0.747 |
| 0.4874 | 3.33 | 800 | 0.5201 | 0.7393 | 0.74 |
| 0.4892 | 4.17 | 1000 | 0.5116 | 0.7458 | 0.746 |
| 0.4787 | 5.0 | 1200 | 0.5165 | 0.7476 | 0.748 |
| 0.4779 | 5.83 | 1400 | 0.5126 | 0.7477 | 0.748 |
| 0.4774 | 6.67 | 1600 | 0.5136 | 0.7475 | 0.748 |
| 0.4745 | 7.5 | 1800 | 0.5076 | 0.7480 | 0.748 |
| 0.4713 | 8.33 | 2000 | 0.5143 | 0.7500 | 0.751 |
| 0.4717 | 9.17 | 2200 | 0.5100 | 0.7386 | 0.739 |
| 0.4705 | 10.0 | 2400 | 0.5214 | 0.7446 | 0.746 |
| 0.4697 | 10.83 | 2600 | 0.5145 | 0.7435 | 0.745 |
| 0.469 | 11.67 | 2800 | 0.5212 | 0.7442 | 0.746 |
| 0.4586 | 12.5 | 3000 | 0.5150 | 0.7424 | 0.744 |
| 0.47 | 13.33 | 3200 | 0.5163 | 0.7432 | 0.745 |
| 0.4622 | 14.17 | 3400 | 0.5057 | 0.7339 | 0.734 |
| 0.4623 | 15.0 | 3600 | 0.5242 | 0.7416 | 0.744 |
| 0.461 | 15.83 | 3800 | 0.5069 | 0.7333 | 0.734 |
| 0.4661 | 16.67 | 4000 | 0.5195 | 0.7411 | 0.743 |
| 0.4596 | 17.5 | 4200 | 0.5153 | 0.7424 | 0.744 |
| 0.4562 | 18.33 | 4400 | 0.5202 | 0.7429 | 0.744 |
| 0.4605 | 19.17 | 4600 | 0.5175 | 0.7424 | 0.744 |
| 0.4605 | 20.0 | 4800 | 0.5091 | 0.7470 | 0.748 |
| 0.4601 | 20.83 | 5000 | 0.5126 | 0.7422 | 0.743 |
| 0.4548 | 21.67 | 5200 | 0.5120 | 0.7410 | 0.742 |
| 0.4566 | 22.5 | 5400 | 0.5085 | 0.7386 | 0.739 |
| 0.4576 | 23.33 | 5600 | 0.5144 | 0.7407 | 0.742 |
| 0.4551 | 24.17 | 5800 | 0.5216 | 0.7393 | 0.741 |
| 0.4569 | 25.0 | 6000 | 0.5070 | 0.7338 | 0.734 |
| 0.4543 | 25.83 | 6200 | 0.5109 | 0.7381 | 0.739 |
| 0.4517 | 26.67 | 6400 | 0.5067 | 0.7379 | 0.738 |
| 0.4559 | 27.5 | 6600 | 0.5136 | 0.7412 | 0.742 |
| 0.4542 | 28.33 | 6800 | 0.5107 | 0.7412 | 0.742 |
| 0.454 | 29.17 | 7000 | 0.5107 | 0.7414 | 0.742 |
| 0.4547 | 30.0 | 7200 | 0.5112 | 0.7429 | 0.744 |
| 0.4558 | 30.83 | 7400 | 0.5196 | 0.7431 | 0.745 |
| 0.4514 | 31.67 | 7600 | 0.5059 | 0.7376 | 0.738 |
| 0.4546 | 32.5 | 7800 | 0.5075 | 0.7424 | 0.743 |
| 0.4499 | 33.33 | 8000 | 0.5113 | 0.7391 | 0.74 |
| 0.4561 | 34.17 | 8200 | 0.5075 | 0.7385 | 0.739 |
| 0.4503 | 35.0 | 8400 | 0.5075 | 0.7396 | 0.74 |
| 0.4551 | 35.83 | 8600 | 0.5081 | 0.7411 | 0.742 |
| 0.4535 | 36.67 | 8800 | 0.5095 | 0.7403 | 0.741 |
| 0.4489 | 37.5 | 9000 | 0.5168 | 0.7431 | 0.745 |
| 0.4517 | 38.33 | 9200 | 0.5100 | 0.7403 | 0.741 |
| 0.4498 | 39.17 | 9400 | 0.5097 | 0.7414 | 0.742 |
| 0.4526 | 40.0 | 9600 | 0.5103 | 0.7420 | 0.743 |
| 0.4508 | 40.83 | 9800 | 0.5082 | 0.7376 | 0.738 |
| 0.4508 | 41.67 | 10000 | 0.5093 | 0.7412 | 0.742 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_1-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T05:56:38+00:00 |
text2text-generation | transformers | {} | megasiska86/bart-trained | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T05:58:58+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** xsa-dev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
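The card does not show how to load the checkpoint. A minimal sketch with Unsloth's `FastLanguageModel` might look like the following (the `max_seq_length` value is an assumption; the repository id is the one this card belongs to):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="xsa-dev/hugs_llama3_technique_ft_16bit",
    max_seq_length=2048,   # assumed; not stated in the card
    load_in_4bit=False,    # the card describes this export as 16-bit
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```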
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | xsa-dev/hugs_llama3_technique_ft_16bit | null | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T06:00:12+00:00 |
null | null | {"license": "mit"} | rasheduzzaman/Bangla_law_model | null | [
"safetensors",
"license:mit",
"region:us"
]
| null | 2024-04-27T06:01:25+00:00 |
|
text-generation | transformers |
# DistilGPT2
DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of [GPT-2](https://huggingface.co/gpt2).
## Model Details
- **Developed by:** Hugging Face
- **Model type:** Transformer-based Language Model
- **Language:** English
- **License:** Apache 2.0
- **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2.
- **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including Distilled-GPT2), [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/).
## Uses, Limitations and Risks
#### Limitations and Risks
<details>
<summary>Click to expand</summary>
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
As the developers of GPT-2 (OpenAI) note in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md), “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.
The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:
- [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(48)
>>> generator("The White man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': "The White man worked as a salesman at a McDonald's restaurant called Kia at the time of the"},
{'generated_text': 'The White man worked as a contractor in the Army in the late 1990s. He became a "'},
{'generated_text': 'The White man worked as a police spokesman to the US Navy in the 1930s.'}]
>>> set_seed(48)
>>> generator("The Black man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': 'The Black man worked as a shop assistant for an hour at Wal-Mart at Wal-Mart in'},
{'generated_text': 'The Black man worked as a waiter in the hotel when he was assaulted when he got out of a'},
{'generated_text': 'The Black man worked as a police spokesman four months ago...'}]
```
</details>
#### Potential Uses
Since DistilGPT2 is a distilled version of GPT-2, it is intended for similar use cases, with the added benefit of being smaller and easier to run than the base model.
The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
> - *Entertainment: Creation of games, chat bots, and amusing generations.*
Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.
#### Out-of-scope Uses
OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
### How to Get Started with the Model
<details>
<summary>Click to expand</summary>
*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(42)
>>> generator("Hello, I’m a language model", max_length=20, num_return_sequences=5)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
[{'generated_text': "Hello, I'm a language model, I'm a language model. In my previous post I've"},
{'generated_text': "Hello, I'm a language model, and I'd love to hear what you think about it."},
{'generated_text': "Hello, I'm a language model, but I don't get much of a connection anymore, so"},
{'generated_text': "Hello, I'm a language model, a functional language... It's not an example, and that"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = GPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
And in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = TFGPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
</details>
## Training Data
DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for additional information about WebText.
## Training Procedure
The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108).
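As a rough illustration of that objective, the student is trained to match the teacher's temperature-softened output distribution alongside the usual language-modeling loss. The sketch below shows the core of such a loss in PyTorch; the temperature and weighting values are illustrative, not the exact training configuration:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable (Hinton et al., 2015).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: standard cross-entropy against the true next tokens.
    hard = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    return alpha * soft + (1 - alpha) * hard
```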
## Evaluation Results
The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
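Perplexity is the exponential of the average per-token cross-entropy. A minimal sketch of computing it with this model (a simplified single-pass evaluation on a short text, not the exact benchmark protocol):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = GPT2LMHeadModel.from_pretrained("distilgpt2")
model.eval()

text = "Replace me with held-out evaluation text."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # With labels supplied, the model returns the mean cross-entropy loss.
    loss = model(**enc, labels=enc["input_ids"]).loss
print(torch.exp(loss).item())  # perplexity = exp(mean negative log-likelihood)
```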
## Environmental Impact
*Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- **Hardware Type:** 8 16GB V100
- **Hours used:** 168 (1 week)
- **Cloud Provider:** Azure
- **Compute Region:** unavailable, assumed East US for calculations
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
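Plugging representative numbers into that formula recovers a figure of this magnitude; note that the per-GPU power draw and grid carbon intensity below are assumptions for illustration, not values reported by the authors:

```python
gpus = 8
power_kw_per_gpu = 0.3      # assumed ~300 W average draw per 16GB V100
hours = 168                 # one week, as reported above
grid_kg_co2_per_kwh = 0.37  # assumed intensity for the East US grid

energy_kwh = gpus * power_kw_per_gpu * hours     # 403.2 kWh
emissions_kg = energy_kwh * grid_kg_co2_per_kwh  # ~149 kg CO2eq
print(round(emissions_kg, 1))
```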
## Citation
```bibtex
@inproceedings{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
booktitle={NeurIPS EMC^2 Workshop},
year={2019}
}
```
## Glossary
- <a name="knowledge-distillation">**Knowledge Distillation**</a>: As described in [Sanh et al. (2019)](https://arxiv.org/pdf/1910.01108.pdf), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531).
<a href="https://huggingface.co/exbert/?model=distilgpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["openwebtext"], "co2_eq_emissions": 149200, "model-index": [{"name": "distilgpt2", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "WikiText-103", "type": "wikitext"}, "metrics": [{"type": "perplexity", "value": 21.1, "name": "Perplexity"}]}]}]} | jiajiahong2134/DLhw2 | null | [
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"coreml",
"safetensors",
"gpt2",
"text-generation",
"exbert",
"en",
"dataset:openwebtext",
"arxiv:1910.01108",
"arxiv:2201.08542",
"arxiv:2203.12574",
"arxiv:1910.09700",
"arxiv:1503.02531",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T06:01:51+00:00 |
null | mlx |
# mlx-community/MXLewd-L2-20B-4bit
This model was converted to MLX format from [`Undi95/MXLewd-L2-20B`](https://huggingface.co/Undi95/MXLewd-L2-20B).
Refer to the [original model card](https://huggingface.co/Undi95/MXLewd-L2-20B) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer from the Hub.
model, tokenizer = load("mlx-community/MXLewd-L2-20B-4bit")
# Generate a completion; verbose=True streams tokens as they are produced.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"license": "cc-by-nc-4.0", "tags": ["mlx"]} | mlx-community/MXLewd-L2-20B-4bit | null | [
"mlx",
"safetensors",
"llama",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-04-27T06:02:31+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | sophiex/pythia-1b-sft_hh_rlhf | null | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T06:02:57+00:00 |
null | null | {} | ACEGameAI/Josef-Ali_ohwx-man | null | [
"region:us"
]
| null | 2024-04-27T06:05:21+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Hariharan345/tinyllama-momxchat-v1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T06:05:32+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3572
- F1 Score: 0.8540
- Accuracy: 0.855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5461 | 0.83 | 200 | 0.5206 | 0.7425 | 0.743 |
| 0.4884 | 1.67 | 400 | 0.5083 | 0.744 | 0.744 |
| 0.4765 | 2.5 | 600 | 0.5088 | 0.7569 | 0.757 |
| 0.4668 | 3.33 | 800 | 0.5015 | 0.7396 | 0.74 |
| 0.4674 | 4.17 | 1000 | 0.5042 | 0.7445 | 0.746 |
| 0.4556 | 5.0 | 1200 | 0.5099 | 0.7517 | 0.753 |
| 0.4527 | 5.83 | 1400 | 0.4961 | 0.7490 | 0.749 |
| 0.4483 | 6.67 | 1600 | 0.4996 | 0.7504 | 0.751 |
| 0.4442 | 7.5 | 1800 | 0.4995 | 0.7598 | 0.76 |
| 0.435 | 8.33 | 2000 | 0.5027 | 0.7499 | 0.75 |
| 0.4364 | 9.17 | 2200 | 0.5055 | 0.7559 | 0.756 |
| 0.4323 | 10.0 | 2400 | 0.5250 | 0.7421 | 0.744 |
| 0.4288 | 10.83 | 2600 | 0.5077 | 0.7416 | 0.743 |
| 0.4252 | 11.67 | 2800 | 0.5144 | 0.7510 | 0.752 |
| 0.4135 | 12.5 | 3000 | 0.5219 | 0.7497 | 0.751 |
| 0.422 | 13.33 | 3200 | 0.5150 | 0.7361 | 0.737 |
| 0.4098 | 14.17 | 3400 | 0.5238 | 0.7560 | 0.756 |
| 0.4104 | 15.0 | 3600 | 0.5316 | 0.7461 | 0.747 |
| 0.403 | 15.83 | 3800 | 0.5142 | 0.7455 | 0.746 |
| 0.404 | 16.67 | 4000 | 0.5393 | 0.7496 | 0.75 |
| 0.3993 | 17.5 | 4200 | 0.5363 | 0.7376 | 0.739 |
| 0.391 | 18.33 | 4400 | 0.5484 | 0.7389 | 0.74 |
| 0.3958 | 19.17 | 4600 | 0.5428 | 0.7402 | 0.741 |
| 0.3903 | 20.0 | 4800 | 0.5299 | 0.7449 | 0.745 |
| 0.3883 | 20.83 | 5000 | 0.5338 | 0.7429 | 0.743 |
| 0.3821 | 21.67 | 5200 | 0.5431 | 0.7436 | 0.744 |
| 0.3772 | 22.5 | 5400 | 0.5500 | 0.7391 | 0.74 |
| 0.3793 | 23.33 | 5600 | 0.5558 | 0.7322 | 0.734 |
| 0.375 | 24.17 | 5800 | 0.5617 | 0.7370 | 0.738 |
| 0.3756 | 25.0 | 6000 | 0.5468 | 0.7349 | 0.735 |
| 0.3696 | 25.83 | 6200 | 0.5491 | 0.7346 | 0.735 |
| 0.3615 | 26.67 | 6400 | 0.5616 | 0.7440 | 0.744 |
| 0.3633 | 27.5 | 6600 | 0.5913 | 0.7408 | 0.741 |
| 0.3619 | 28.33 | 6800 | 0.5796 | 0.7369 | 0.737 |
| 0.3594 | 29.17 | 7000 | 0.5640 | 0.7359 | 0.736 |
| 0.3591 | 30.0 | 7200 | 0.5710 | 0.7379 | 0.738 |
| 0.3572 | 30.83 | 7400 | 0.5823 | 0.7269 | 0.728 |
| 0.3524 | 31.67 | 7600 | 0.5870 | 0.7349 | 0.735 |
| 0.3533 | 32.5 | 7800 | 0.5801 | 0.7348 | 0.735 |
| 0.3502 | 33.33 | 8000 | 0.5838 | 0.7294 | 0.73 |
| 0.3532 | 34.17 | 8200 | 0.5757 | 0.7389 | 0.739 |
| 0.3441 | 35.0 | 8400 | 0.5883 | 0.7328 | 0.733 |
| 0.3463 | 35.83 | 8600 | 0.5815 | 0.7278 | 0.728 |
| 0.3462 | 36.67 | 8800 | 0.5869 | 0.7277 | 0.728 |
| 0.3382 | 37.5 | 9000 | 0.6033 | 0.7240 | 0.725 |
| 0.3426 | 38.33 | 9200 | 0.6004 | 0.7287 | 0.729 |
| 0.3371 | 39.17 | 9400 | 0.6018 | 0.7327 | 0.733 |
| 0.3423 | 40.0 | 9600 | 0.5990 | 0.7277 | 0.728 |
| 0.34 | 40.83 | 9800 | 0.5971 | 0.7298 | 0.73 |
| 0.3378 | 41.67 | 10000 | 0.5986 | 0.7266 | 0.727 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_1-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T06:05:36+00:00 |
null | null | {} | ACEGameAI/Markus-Greenberg_ohwx-man | null | [
"region:us"
]
| null | 2024-04-27T06:05:36+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3453
- F1 Score: 0.8467
- Accuracy: 0.847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5564 | 0.83 | 200 | 0.5261 | 0.7419 | 0.742 |
| 0.4934 | 1.67 | 400 | 0.5128 | 0.7410 | 0.741 |
| 0.4824 | 2.5 | 600 | 0.5134 | 0.7507 | 0.751 |
| 0.4741 | 3.33 | 800 | 0.5098 | 0.7365 | 0.737 |
| 0.4762 | 4.17 | 1000 | 0.5133 | 0.7387 | 0.74 |
| 0.4669 | 5.0 | 1200 | 0.5146 | 0.7455 | 0.747 |
| 0.4654 | 5.83 | 1400 | 0.5056 | 0.7405 | 0.741 |
| 0.4636 | 6.67 | 1600 | 0.5076 | 0.7384 | 0.739 |
| 0.4609 | 7.5 | 1800 | 0.5012 | 0.7420 | 0.742 |
| 0.4538 | 8.33 | 2000 | 0.5043 | 0.7394 | 0.74 |
| 0.4554 | 9.17 | 2200 | 0.5055 | 0.7548 | 0.755 |
| 0.4538 | 10.0 | 2400 | 0.5309 | 0.7361 | 0.739 |
| 0.4509 | 10.83 | 2600 | 0.5123 | 0.7422 | 0.744 |
| 0.4496 | 11.67 | 2800 | 0.5134 | 0.7388 | 0.741 |
| 0.4383 | 12.5 | 3000 | 0.5055 | 0.7491 | 0.75 |
| 0.4496 | 13.33 | 3200 | 0.5057 | 0.7433 | 0.745 |
| 0.4409 | 14.17 | 3400 | 0.4966 | 0.752 | 0.752 |
| 0.4385 | 15.0 | 3600 | 0.5030 | 0.7558 | 0.757 |
| 0.4371 | 15.83 | 3800 | 0.4960 | 0.7544 | 0.755 |
| 0.4385 | 16.67 | 4000 | 0.5045 | 0.7574 | 0.758 |
| 0.4347 | 17.5 | 4200 | 0.5035 | 0.7507 | 0.752 |
| 0.429 | 18.33 | 4400 | 0.5085 | 0.7593 | 0.76 |
| 0.4354 | 19.17 | 4600 | 0.5055 | 0.7481 | 0.749 |
| 0.4323 | 20.0 | 4800 | 0.4935 | 0.7597 | 0.76 |
| 0.4319 | 20.83 | 5000 | 0.4992 | 0.7537 | 0.754 |
| 0.4267 | 21.67 | 5200 | 0.4983 | 0.7575 | 0.758 |
| 0.4249 | 22.5 | 5400 | 0.4994 | 0.7468 | 0.747 |
| 0.4265 | 23.33 | 5600 | 0.5038 | 0.7470 | 0.748 |
| 0.4253 | 24.17 | 5800 | 0.5070 | 0.7510 | 0.752 |
| 0.4262 | 25.0 | 6000 | 0.4912 | 0.7510 | 0.751 |
| 0.424 | 25.83 | 6200 | 0.4955 | 0.7597 | 0.76 |
| 0.4191 | 26.67 | 6400 | 0.4953 | 0.7620 | 0.762 |
| 0.4231 | 27.5 | 6600 | 0.5051 | 0.7638 | 0.764 |
| 0.4192 | 28.33 | 6800 | 0.4985 | 0.7497 | 0.75 |
| 0.4207 | 29.17 | 7000 | 0.4991 | 0.7488 | 0.749 |
| 0.4207 | 30.0 | 7200 | 0.4955 | 0.7517 | 0.752 |
| 0.4191 | 30.83 | 7400 | 0.5034 | 0.7482 | 0.749 |
| 0.4166 | 31.67 | 7600 | 0.4966 | 0.7528 | 0.753 |
| 0.4186 | 32.5 | 7800 | 0.4978 | 0.7528 | 0.753 |
| 0.4165 | 33.33 | 8000 | 0.4988 | 0.7518 | 0.752 |
| 0.4204 | 34.17 | 8200 | 0.4949 | 0.7487 | 0.749 |
| 0.413 | 35.0 | 8400 | 0.4975 | 0.7508 | 0.751 |
| 0.417 | 35.83 | 8600 | 0.4952 | 0.7478 | 0.748 |
| 0.4172 | 36.67 | 8800 | 0.4971 | 0.7467 | 0.747 |
| 0.4101 | 37.5 | 9000 | 0.5015 | 0.7530 | 0.754 |
| 0.4141 | 38.33 | 9200 | 0.4980 | 0.7517 | 0.752 |
| 0.4116 | 39.17 | 9400 | 0.4992 | 0.7517 | 0.752 |
| 0.4143 | 40.0 | 9600 | 0.4989 | 0.7507 | 0.751 |
| 0.4135 | 40.83 | 9800 | 0.4982 | 0.7508 | 0.751 |
| 0.4122 | 41.67 | 10000 | 0.4985 | 0.7516 | 0.752 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_1-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T06:05:36+00:00 |
null | null | {} | ai-tools-searchs/BB | null | [
"region:us"
]
| null | 2024-04-27T06:05:50+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3424
- F1 Score: 0.8517
- Accuracy: 0.852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5854 | 1.34 | 200 | 0.5537 | 0.7169 | 0.717 |
| 0.5051 | 2.68 | 400 | 0.5201 | 0.7266 | 0.727 |
| 0.4902 | 4.03 | 600 | 0.5080 | 0.7427 | 0.743 |
| 0.4742 | 5.37 | 800 | 0.5060 | 0.7509 | 0.751 |
| 0.4624 | 6.71 | 1000 | 0.4862 | 0.7540 | 0.754 |
| 0.4554 | 8.05 | 1200 | 0.4918 | 0.7601 | 0.761 |
| 0.4482 | 9.4 | 1400 | 0.4795 | 0.7689 | 0.769 |
| 0.4448 | 10.74 | 1600 | 0.4757 | 0.7639 | 0.764 |
| 0.4376 | 12.08 | 1800 | 0.4773 | 0.7739 | 0.774 |
| 0.4382 | 13.42 | 2000 | 0.4706 | 0.7617 | 0.762 |
| 0.4265 | 14.77 | 2200 | 0.4875 | 0.7599 | 0.761 |
| 0.4297 | 16.11 | 2400 | 0.4678 | 0.7730 | 0.773 |
| 0.4246 | 17.45 | 2600 | 0.4689 | 0.7749 | 0.775 |
| 0.4242 | 18.79 | 2800 | 0.4708 | 0.7727 | 0.773 |
| 0.4251 | 20.13 | 3000 | 0.4730 | 0.7694 | 0.77 |
| 0.4188 | 21.48 | 3200 | 0.4637 | 0.7739 | 0.774 |
| 0.4162 | 22.82 | 3400 | 0.4657 | 0.7729 | 0.773 |
| 0.416 | 24.16 | 3600 | 0.4613 | 0.7730 | 0.773 |
| 0.4182 | 25.5 | 3800 | 0.4592 | 0.7840 | 0.784 |
| 0.4112 | 26.85 | 4000 | 0.4655 | 0.7747 | 0.775 |
| 0.4128 | 28.19 | 4200 | 0.4651 | 0.7738 | 0.774 |
| 0.4061 | 29.53 | 4400 | 0.4662 | 0.7788 | 0.779 |
| 0.4098 | 30.87 | 4600 | 0.4586 | 0.7809 | 0.781 |
| 0.4102 | 32.21 | 4800 | 0.4567 | 0.7819 | 0.782 |
| 0.4037 | 33.56 | 5000 | 0.4619 | 0.7840 | 0.784 |
| 0.407 | 34.9 | 5200 | 0.4613 | 0.7850 | 0.785 |
| 0.4086 | 36.24 | 5400 | 0.4580 | 0.784 | 0.784 |
| 0.4021 | 37.58 | 5600 | 0.4589 | 0.7820 | 0.782 |
| 0.4039 | 38.93 | 5800 | 0.4641 | 0.7767 | 0.777 |
| 0.4008 | 40.27 | 6000 | 0.4613 | 0.7800 | 0.78 |
| 0.4015 | 41.61 | 6200 | 0.4617 | 0.7798 | 0.78 |
| 0.4019 | 42.95 | 6400 | 0.4610 | 0.7848 | 0.785 |
| 0.403 | 44.3 | 6600 | 0.4558 | 0.7860 | 0.786 |
| 0.3985 | 45.64 | 6800 | 0.4609 | 0.7878 | 0.788 |
| 0.4003 | 46.98 | 7000 | 0.4631 | 0.7847 | 0.785 |
| 0.4027 | 48.32 | 7200 | 0.4612 | 0.7817 | 0.782 |
| 0.3962 | 49.66 | 7400 | 0.4619 | 0.7825 | 0.783 |
| 0.3925 | 51.01 | 7600 | 0.4575 | 0.7829 | 0.783 |
| 0.3959 | 52.35 | 7800 | 0.4566 | 0.79 | 0.79 |
| 0.3929 | 53.69 | 8000 | 0.4631 | 0.7826 | 0.783 |
| 0.3971 | 55.03 | 8200 | 0.4689 | 0.7783 | 0.779 |
| 0.3944 | 56.38 | 8400 | 0.4611 | 0.7827 | 0.783 |
| 0.3944 | 57.72 | 8600 | 0.4564 | 0.7900 | 0.79 |
| 0.3948 | 59.06 | 8800 | 0.4602 | 0.7807 | 0.781 |
| 0.3919 | 60.4 | 9000 | 0.4594 | 0.7808 | 0.781 |
| 0.3945 | 61.74 | 9200 | 0.4573 | 0.7829 | 0.783 |
| 0.3947 | 63.09 | 9400 | 0.4594 | 0.7778 | 0.778 |
| 0.395 | 64.43 | 9600 | 0.4566 | 0.7829 | 0.783 |
| 0.39 | 65.77 | 9800 | 0.4578 | 0.7809 | 0.781 |
| 0.3899 | 67.11 | 10000 | 0.4582 | 0.7809 | 0.781 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_4-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T06:06:17+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | nanxiangzifeng/test | null | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T06:06:29+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3598
- F1 Score: 0.8480
- Accuracy: 0.848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5483 | 1.34 | 200 | 0.5224 | 0.7330 | 0.734 |
| 0.472 | 2.68 | 400 | 0.4887 | 0.7609 | 0.761 |
| 0.4543 | 4.03 | 600 | 0.4795 | 0.7639 | 0.764 |
| 0.4372 | 5.37 | 800 | 0.4842 | 0.7695 | 0.77 |
| 0.4264 | 6.71 | 1000 | 0.4702 | 0.778 | 0.778 |
| 0.4206 | 8.05 | 1200 | 0.4753 | 0.7693 | 0.77 |
| 0.414 | 9.4 | 1400 | 0.4646 | 0.7717 | 0.772 |
| 0.4068 | 10.74 | 1600 | 0.4649 | 0.7785 | 0.779 |
| 0.4021 | 12.08 | 1800 | 0.4631 | 0.7840 | 0.784 |
| 0.3979 | 13.42 | 2000 | 0.4624 | 0.7739 | 0.775 |
| 0.387 | 14.77 | 2200 | 0.4719 | 0.7798 | 0.781 |
| 0.3869 | 16.11 | 2400 | 0.4515 | 0.7790 | 0.779 |
| 0.3779 | 17.45 | 2600 | 0.4681 | 0.7760 | 0.777 |
| 0.3785 | 18.79 | 2800 | 0.4608 | 0.7838 | 0.784 |
| 0.3752 | 20.13 | 3000 | 0.4694 | 0.7787 | 0.78 |
| 0.3677 | 21.48 | 3200 | 0.4535 | 0.7949 | 0.795 |
| 0.3626 | 22.82 | 3400 | 0.4574 | 0.7979 | 0.798 |
| 0.3594 | 24.16 | 3600 | 0.4475 | 0.7980 | 0.798 |
| 0.3547 | 25.5 | 3800 | 0.4535 | 0.7910 | 0.791 |
| 0.3476 | 26.85 | 4000 | 0.4552 | 0.7998 | 0.8 |
| 0.3481 | 28.19 | 4200 | 0.4633 | 0.7926 | 0.793 |
| 0.3391 | 29.53 | 4400 | 0.4584 | 0.7988 | 0.799 |
| 0.3389 | 30.87 | 4600 | 0.4667 | 0.7949 | 0.796 |
| 0.3374 | 32.21 | 4800 | 0.4561 | 0.7965 | 0.797 |
| 0.3307 | 33.56 | 5000 | 0.4695 | 0.7985 | 0.799 |
| 0.3335 | 34.9 | 5200 | 0.4568 | 0.8008 | 0.801 |
| 0.3299 | 36.24 | 5400 | 0.4493 | 0.7989 | 0.799 |
| 0.3214 | 37.58 | 5600 | 0.4522 | 0.8027 | 0.803 |
| 0.3222 | 38.93 | 5800 | 0.4559 | 0.7958 | 0.796 |
| 0.3172 | 40.27 | 6000 | 0.4492 | 0.7939 | 0.794 |
| 0.3139 | 41.61 | 6200 | 0.4699 | 0.7957 | 0.796 |
| 0.3151 | 42.95 | 6400 | 0.4662 | 0.7943 | 0.795 |
| 0.3146 | 44.3 | 6600 | 0.4521 | 0.8029 | 0.803 |
| 0.3088 | 45.64 | 6800 | 0.4535 | 0.7968 | 0.797 |
| 0.3066 | 46.98 | 7000 | 0.4643 | 0.7965 | 0.797 |
| 0.3064 | 48.32 | 7200 | 0.4512 | 0.8049 | 0.805 |
| 0.3033 | 49.66 | 7400 | 0.4592 | 0.8007 | 0.801 |
| 0.3024 | 51.01 | 7600 | 0.4569 | 0.8006 | 0.801 |
| 0.2991 | 52.35 | 7800 | 0.4457 | 0.8140 | 0.814 |
| 0.2948 | 53.69 | 8000 | 0.4808 | 0.7932 | 0.794 |
| 0.2969 | 55.03 | 8200 | 0.4788 | 0.7901 | 0.791 |
| 0.2953 | 56.38 | 8400 | 0.4647 | 0.8027 | 0.803 |
| 0.2946 | 57.72 | 8600 | 0.4582 | 0.8058 | 0.806 |
| 0.2931 | 59.06 | 8800 | 0.4634 | 0.8017 | 0.802 |
| 0.2901 | 60.4 | 9000 | 0.4639 | 0.8068 | 0.807 |
| 0.2909 | 61.74 | 9200 | 0.4583 | 0.8080 | 0.808 |
| 0.2918 | 63.09 | 9400 | 0.4634 | 0.8037 | 0.804 |
| 0.2897 | 64.43 | 9600 | 0.4629 | 0.8047 | 0.805 |
| 0.286 | 65.77 | 9800 | 0.4610 | 0.8098 | 0.81 |
| 0.2892 | 67.11 | 10000 | 0.4608 | 0.8098 | 0.81 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_4-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T06:06:55+00:00 |
null | null | {"license": "openrail"} | MinLeo/JONGSEOB-AllRounder | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-27T06:07:31+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | uniiiii/wav2vec2-base-timit-demo-colab | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T06:08:28+00:00 |
text-generation | transformers | Quantizations of https://huggingface.co/Nexusflow/Starling-LM-7B-beta
# From original readme
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Important: please use the exact chat template provided below for the model; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
Our model follows the same chat template and usage as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). Please refer to their model card for more details.
In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free testing.
The conversation template is the same as Openchat-3.5-0106:
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
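## Running the GGUF quantizations
Since this repository hosts GGUF quantizations, a minimal sketch for running one locally with `llama-cpp-python` is shown below. The quant filename is a placeholder; pick an actual `.gguf` file from this repo's file list.
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder filename: substitute a real .gguf file downloaded from this repo.
llm = Llama(model_path="Starling-LM-7B-beta-Q4_K_M.gguf", n_ctx=4096)

prompt = "GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:"
out = llm(prompt, max_tokens=256, stop=["<|end_of_turn|>"])
print(out["choices"][0]["text"])
```
Stopping on `<|end_of_turn|>` mirrors the chat template above and keeps generations to a single assistant turn.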
## Code Examples
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("Nexusflow/Starling-LM-7B-beta")
model = transformers.AutoModelForCausalLM.from_pretrained("Nexusflow/Starling-LM-7B-beta")
def generate_response(prompt):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    outputs = model.generate(
        input_ids,
        max_length=256,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
    response_ids = outputs[0]
    response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
    return response_text
# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)
## Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)
### Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
``` | {"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "Starling-LM-7B-beta"], "pipeline_tag": "text-generation", "inference": false} | duyntnet/Starling-LM-7B-beta-imatrix-GGUF | null | [
"transformers",
"gguf",
"imatrix",
"Starling-LM-7B-beta",
"text-generation",
"en",
"license:other",
"region:us"
]
| null | 2024-04-27T06:09:05+00:00 |
null | null | {} | hb1115/llama-2-7b-ghl-support-1epoch | null | [
"region:us"
]
| null | 2024-04-27T06:10:37+00:00 |
|
null | null | {"license": "openrail"} | Loren85/PG-Tth-Owl-House-Pilot-leked | null | [
"license:openrail",
"region:us"
]
| null | 2024-04-27T06:14:39+00:00 |
|
null | null | {} | uniiiii/wav2vec2-large-xlsr-turkish-demo-colab | null | [
"region:us"
]
| null | 2024-04-27T06:15:01+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5419
- F1 Score: 0.8429
- Accuracy: 0.843
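As a reference, here is a minimal sketch for loading this PEFT adapter. It assumes the base model loads through `AutoModel` with `trust_remote_code`; the exact model class used for fine-tuning is not documented in this card.
```python
# pip install peft transformers
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

# Assumed base-model class; adjust if the checkpoint expects a task head.
base = AutoModel.from_pretrained(
    "mahdibaghbanzadeh/seqsight_8192_512_30M", trust_remote_code=True
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_tf_4-seqsight_8192_512_30M-L32_f"
)
tokenizer = AutoTokenizer.from_pretrained(
    "mahdibaghbanzadeh/seqsight_8192_512_30M", trust_remote_code=True
)
```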
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5298 | 1.34 | 200 | 0.5022 | 0.7565 | 0.757 |
| 0.4547 | 2.68 | 400 | 0.4890 | 0.7627 | 0.764 |
| 0.4374 | 4.03 | 600 | 0.4695 | 0.7688 | 0.769 |
| 0.417 | 5.37 | 800 | 0.4740 | 0.7803 | 0.781 |
| 0.4019 | 6.71 | 1000 | 0.4525 | 0.7909 | 0.791 |
| 0.3911 | 8.05 | 1200 | 0.4531 | 0.7927 | 0.793 |
| 0.3802 | 9.4 | 1400 | 0.4492 | 0.7999 | 0.8 |
| 0.3654 | 10.74 | 1600 | 0.4430 | 0.8068 | 0.807 |
| 0.3567 | 12.08 | 1800 | 0.4510 | 0.8098 | 0.81 |
| 0.3443 | 13.42 | 2000 | 0.4679 | 0.7884 | 0.79 |
| 0.3297 | 14.77 | 2200 | 0.4379 | 0.8086 | 0.809 |
| 0.3209 | 16.11 | 2400 | 0.4293 | 0.8140 | 0.814 |
| 0.3056 | 17.45 | 2600 | 0.4517 | 0.8065 | 0.807 |
| 0.2973 | 18.79 | 2800 | 0.4328 | 0.8200 | 0.82 |
| 0.2904 | 20.13 | 3000 | 0.4694 | 0.7990 | 0.8 |
| 0.2822 | 21.48 | 3200 | 0.4324 | 0.8220 | 0.822 |
| 0.2649 | 22.82 | 3400 | 0.4480 | 0.8199 | 0.82 |
| 0.2603 | 24.16 | 3600 | 0.4315 | 0.826 | 0.826 |
| 0.25 | 25.5 | 3800 | 0.4434 | 0.8290 | 0.829 |
| 0.2421 | 26.85 | 4000 | 0.4351 | 0.8370 | 0.837 |
| 0.2383 | 28.19 | 4200 | 0.4811 | 0.8113 | 0.812 |
| 0.2286 | 29.53 | 4400 | 0.4528 | 0.8419 | 0.842 |
| 0.2263 | 30.87 | 4600 | 0.4559 | 0.8269 | 0.827 |
| 0.2144 | 32.21 | 4800 | 0.4749 | 0.8309 | 0.831 |
| 0.2087 | 33.56 | 5000 | 0.4811 | 0.8400 | 0.84 |
| 0.209 | 34.9 | 5200 | 0.4559 | 0.8390 | 0.839 |
| 0.2005 | 36.24 | 5400 | 0.4649 | 0.8510 | 0.851 |
| 0.1936 | 37.58 | 5600 | 0.4457 | 0.8470 | 0.847 |
| 0.1885 | 38.93 | 5800 | 0.4884 | 0.8449 | 0.845 |
| 0.1823 | 40.27 | 6000 | 0.4702 | 0.8519 | 0.852 |
| 0.1812 | 41.61 | 6200 | 0.4743 | 0.8450 | 0.845 |
| 0.1769 | 42.95 | 6400 | 0.4743 | 0.8530 | 0.853 |
| 0.1747 | 44.3 | 6600 | 0.4964 | 0.8560 | 0.856 |
| 0.1684 | 45.64 | 6800 | 0.4925 | 0.8530 | 0.853 |
| 0.1649 | 46.98 | 7000 | 0.4920 | 0.8550 | 0.855 |
| 0.1642 | 48.32 | 7200 | 0.4878 | 0.8590 | 0.859 |
| 0.1606 | 49.66 | 7400 | 0.4807 | 0.8550 | 0.855 |
| 0.1583 | 51.01 | 7600 | 0.4972 | 0.8560 | 0.856 |
| 0.1553 | 52.35 | 7800 | 0.5003 | 0.8570 | 0.857 |
| 0.1473 | 53.69 | 8000 | 0.5045 | 0.8580 | 0.858 |
| 0.1492 | 55.03 | 8200 | 0.5266 | 0.8560 | 0.856 |
| 0.1442 | 56.38 | 8400 | 0.5160 | 0.858 | 0.858 |
| 0.1469 | 57.72 | 8600 | 0.5068 | 0.8560 | 0.856 |
| 0.1392 | 59.06 | 8800 | 0.5262 | 0.8540 | 0.854 |
| 0.1418 | 60.4 | 9000 | 0.5185 | 0.8560 | 0.856 |
| 0.1414 | 61.74 | 9200 | 0.5193 | 0.8570 | 0.857 |
| 0.1344 | 63.09 | 9400 | 0.5241 | 0.8560 | 0.856 |
| 0.138 | 64.43 | 9600 | 0.5215 | 0.8520 | 0.852 |
| 0.1358 | 65.77 | 9800 | 0.5252 | 0.8590 | 0.859 |
| 0.133 | 67.11 | 10000 | 0.5244 | 0.8600 | 0.86 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_4-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T06:16:12+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5449
- F1 Score: 0.7169
- Accuracy: 0.719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
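For orientation, the hyperparameters above map onto 🤗 `TrainingArguments` roughly as sketched below; the output directory is a placeholder, and the Adam betas/epsilon listed above are the optimizer defaults.
```python
from transformers import TrainingArguments

# Approximate equivalent of the hyperparameter list above; "out" is a placeholder.
args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```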
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6467 | 0.93 | 200 | 0.5802 | 0.6970 | 0.697 |
| 0.6083 | 1.87 | 400 | 0.5770 | 0.6976 | 0.698 |
| 0.5971 | 2.8 | 600 | 0.5605 | 0.7075 | 0.71 |
| 0.5911 | 3.74 | 800 | 0.5674 | 0.7031 | 0.703 |
| 0.5882 | 4.67 | 1000 | 0.5624 | 0.7079 | 0.708 |
| 0.5847 | 5.61 | 1200 | 0.5655 | 0.7009 | 0.701 |
| 0.5793 | 6.54 | 1400 | 0.5616 | 0.7069 | 0.707 |
| 0.5799 | 7.48 | 1600 | 0.5653 | 0.6941 | 0.694 |
| 0.5761 | 8.41 | 1800 | 0.5666 | 0.6910 | 0.691 |
| 0.5804 | 9.35 | 2000 | 0.5602 | 0.7049 | 0.705 |
| 0.5732 | 10.28 | 2200 | 0.5661 | 0.6960 | 0.696 |
| 0.5722 | 11.21 | 2400 | 0.5587 | 0.7025 | 0.703 |
| 0.5725 | 12.15 | 2600 | 0.5505 | 0.7104 | 0.713 |
| 0.5685 | 13.08 | 2800 | 0.5540 | 0.7074 | 0.709 |
| 0.5701 | 14.02 | 3000 | 0.5515 | 0.7068 | 0.708 |
| 0.5692 | 14.95 | 3200 | 0.5517 | 0.7037 | 0.705 |
| 0.5678 | 15.89 | 3400 | 0.5511 | 0.7025 | 0.703 |
| 0.5654 | 16.82 | 3600 | 0.5562 | 0.6989 | 0.699 |
| 0.5647 | 17.76 | 3800 | 0.5499 | 0.7058 | 0.707 |
| 0.5657 | 18.69 | 4000 | 0.5540 | 0.7049 | 0.705 |
| 0.5623 | 19.63 | 4200 | 0.5523 | 0.7000 | 0.704 |
| 0.5647 | 20.56 | 4400 | 0.5500 | 0.7035 | 0.705 |
| 0.5615 | 21.5 | 4600 | 0.5620 | 0.6965 | 0.697 |
| 0.5596 | 22.43 | 4800 | 0.5545 | 0.7046 | 0.705 |
| 0.5639 | 23.36 | 5000 | 0.5541 | 0.6960 | 0.696 |
| 0.561 | 24.3 | 5200 | 0.5589 | 0.6879 | 0.688 |
| 0.5563 | 25.23 | 5400 | 0.5528 | 0.7071 | 0.709 |
| 0.5629 | 26.17 | 5600 | 0.5498 | 0.7035 | 0.704 |
| 0.5544 | 27.1 | 5800 | 0.5487 | 0.7110 | 0.713 |
| 0.5561 | 28.04 | 6000 | 0.5506 | 0.7045 | 0.705 |
| 0.5545 | 28.97 | 6200 | 0.5551 | 0.6971 | 0.697 |
| 0.5585 | 29.91 | 6400 | 0.5513 | 0.6987 | 0.699 |
| 0.5568 | 30.84 | 6600 | 0.5506 | 0.7056 | 0.706 |
| 0.5548 | 31.78 | 6800 | 0.5540 | 0.702 | 0.702 |
| 0.5545 | 32.71 | 7000 | 0.5514 | 0.7054 | 0.706 |
| 0.5582 | 33.64 | 7200 | 0.5486 | 0.7001 | 0.701 |
| 0.5502 | 34.58 | 7400 | 0.5543 | 0.6971 | 0.697 |
| 0.558 | 35.51 | 7600 | 0.5483 | 0.7028 | 0.703 |
| 0.5565 | 36.45 | 7800 | 0.5519 | 0.6999 | 0.7 |
| 0.5552 | 37.38 | 8000 | 0.5486 | 0.7018 | 0.702 |
| 0.5502 | 38.32 | 8200 | 0.5507 | 0.6990 | 0.7 |
| 0.5546 | 39.25 | 8400 | 0.5517 | 0.7107 | 0.711 |
| 0.5534 | 40.19 | 8600 | 0.5504 | 0.7084 | 0.709 |
| 0.5525 | 41.12 | 8800 | 0.5502 | 0.7086 | 0.709 |
| 0.5524 | 42.06 | 9000 | 0.5508 | 0.7056 | 0.706 |
| 0.5529 | 42.99 | 9200 | 0.5511 | 0.7069 | 0.707 |
| 0.5515 | 43.93 | 9400 | 0.5527 | 0.7040 | 0.704 |
| 0.5509 | 44.86 | 9600 | 0.5508 | 0.7068 | 0.707 |
| 0.554 | 45.79 | 9800 | 0.5511 | 0.7068 | 0.707 |
| 0.5475 | 46.73 | 10000 | 0.5519 | 0.7038 | 0.704 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_3-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T06:16:12+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5253
- F1 Score: 0.7283
- Accuracy: 0.73
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6224 | 0.93 | 200 | 0.5583 | 0.7092 | 0.712 |
| 0.5873 | 1.87 | 400 | 0.5816 | 0.6804 | 0.684 |
| 0.5777 | 2.8 | 600 | 0.5465 | 0.7045 | 0.705 |
| 0.5694 | 3.74 | 800 | 0.5644 | 0.6925 | 0.693 |
| 0.5634 | 4.67 | 1000 | 0.5486 | 0.7019 | 0.702 |
| 0.5573 | 5.61 | 1200 | 0.5394 | 0.7164 | 0.72 |
| 0.5498 | 6.54 | 1400 | 0.5508 | 0.6930 | 0.693 |
| 0.5461 | 7.48 | 1600 | 0.5399 | 0.7098 | 0.71 |
| 0.539 | 8.41 | 1800 | 0.5401 | 0.7089 | 0.709 |
| 0.5408 | 9.35 | 2000 | 0.5442 | 0.7122 | 0.714 |
| 0.5303 | 10.28 | 2200 | 0.5315 | 0.7169 | 0.717 |
| 0.5259 | 11.21 | 2400 | 0.5553 | 0.7148 | 0.715 |
| 0.5175 | 12.15 | 2600 | 0.5496 | 0.7211 | 0.724 |
| 0.5134 | 13.08 | 2800 | 0.5447 | 0.7139 | 0.717 |
| 0.5102 | 14.02 | 3000 | 0.5330 | 0.7248 | 0.725 |
| 0.5038 | 14.95 | 3200 | 0.5366 | 0.7201 | 0.721 |
| 0.5009 | 15.89 | 3400 | 0.5310 | 0.7278 | 0.728 |
| 0.4952 | 16.82 | 3600 | 0.5506 | 0.7161 | 0.716 |
| 0.4919 | 17.76 | 3800 | 0.5353 | 0.7388 | 0.739 |
| 0.4871 | 18.69 | 4000 | 0.5521 | 0.71 | 0.71 |
| 0.4785 | 19.63 | 4200 | 0.5350 | 0.7376 | 0.738 |
| 0.4785 | 20.56 | 4400 | 0.5581 | 0.7181 | 0.718 |
| 0.4698 | 21.5 | 4600 | 0.5795 | 0.7015 | 0.702 |
| 0.4645 | 22.43 | 4800 | 0.5629 | 0.7243 | 0.725 |
| 0.464 | 23.36 | 5000 | 0.5929 | 0.7088 | 0.709 |
| 0.4578 | 24.3 | 5200 | 0.5819 | 0.7021 | 0.703 |
| 0.4504 | 25.23 | 5400 | 0.6046 | 0.7011 | 0.701 |
| 0.454 | 26.17 | 5600 | 0.5637 | 0.7189 | 0.719 |
| 0.445 | 27.1 | 5800 | 0.5777 | 0.7151 | 0.715 |
| 0.4441 | 28.04 | 6000 | 0.5787 | 0.7029 | 0.703 |
| 0.4376 | 28.97 | 6200 | 0.5924 | 0.7131 | 0.713 |
| 0.4383 | 29.91 | 6400 | 0.5811 | 0.7180 | 0.718 |
| 0.4348 | 30.84 | 6600 | 0.5807 | 0.7061 | 0.706 |
| 0.4307 | 31.78 | 6800 | 0.5864 | 0.7069 | 0.707 |
| 0.4262 | 32.71 | 7000 | 0.5827 | 0.7080 | 0.708 |
| 0.4272 | 33.64 | 7200 | 0.5802 | 0.7069 | 0.707 |
| 0.4171 | 34.58 | 7400 | 0.6025 | 0.7005 | 0.702 |
| 0.4225 | 35.51 | 7600 | 0.5901 | 0.7107 | 0.711 |
| 0.4195 | 36.45 | 7800 | 0.6142 | 0.712 | 0.712 |
| 0.4165 | 37.38 | 8000 | 0.6216 | 0.7058 | 0.706 |
| 0.4121 | 38.32 | 8200 | 0.6197 | 0.7081 | 0.708 |
| 0.4092 | 39.25 | 8400 | 0.6197 | 0.7109 | 0.711 |
| 0.4064 | 40.19 | 8600 | 0.6171 | 0.7039 | 0.704 |
| 0.4048 | 41.12 | 8800 | 0.6202 | 0.7101 | 0.71 |
| 0.4053 | 42.06 | 9000 | 0.6268 | 0.6980 | 0.698 |
| 0.4027 | 42.99 | 9200 | 0.6163 | 0.7049 | 0.705 |
| 0.4018 | 43.93 | 9400 | 0.6286 | 0.7048 | 0.705 |
| 0.3973 | 44.86 | 9600 | 0.6287 | 0.7050 | 0.705 |
| 0.4001 | 45.79 | 9800 | 0.6281 | 0.7060 | 0.706 |
| 0.3952 | 46.73 | 10000 | 0.6272 | 0.7090 | 0.709 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_3-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T06:16:29+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5216
- F1 Score: 0.7278
- Accuracy: 0.729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6304 | 0.93 | 200 | 0.5663 | 0.6962 | 0.697 |
| 0.5938 | 1.87 | 400 | 0.5883 | 0.6789 | 0.682 |
| 0.5851 | 2.8 | 600 | 0.5496 | 0.7129 | 0.715 |
| 0.5787 | 3.74 | 800 | 0.5671 | 0.6956 | 0.696 |
| 0.5754 | 4.67 | 1000 | 0.5540 | 0.7041 | 0.704 |
| 0.5706 | 5.61 | 1200 | 0.5475 | 0.6985 | 0.699 |
| 0.5638 | 6.54 | 1400 | 0.5516 | 0.7010 | 0.701 |
| 0.5629 | 7.48 | 1600 | 0.5494 | 0.7051 | 0.705 |
| 0.5583 | 8.41 | 1800 | 0.5522 | 0.6981 | 0.698 |
| 0.5629 | 9.35 | 2000 | 0.5488 | 0.7014 | 0.703 |
| 0.5536 | 10.28 | 2200 | 0.5497 | 0.7060 | 0.706 |
| 0.5516 | 11.21 | 2400 | 0.5589 | 0.7027 | 0.703 |
| 0.5508 | 12.15 | 2600 | 0.5410 | 0.7070 | 0.71 |
| 0.545 | 13.08 | 2800 | 0.5533 | 0.7074 | 0.712 |
| 0.5459 | 14.02 | 3000 | 0.5426 | 0.7043 | 0.705 |
| 0.5429 | 14.95 | 3200 | 0.5418 | 0.7083 | 0.711 |
| 0.5423 | 15.89 | 3400 | 0.5361 | 0.7122 | 0.713 |
| 0.5388 | 16.82 | 3600 | 0.5499 | 0.7093 | 0.71 |
| 0.5381 | 17.76 | 3800 | 0.5418 | 0.7059 | 0.708 |
| 0.5374 | 18.69 | 4000 | 0.5519 | 0.7041 | 0.704 |
| 0.5325 | 19.63 | 4200 | 0.5406 | 0.7118 | 0.715 |
| 0.5342 | 20.56 | 4400 | 0.5422 | 0.7053 | 0.706 |
| 0.5281 | 21.5 | 4600 | 0.5574 | 0.6975 | 0.698 |
| 0.5259 | 22.43 | 4800 | 0.5524 | 0.7069 | 0.708 |
| 0.5313 | 23.36 | 5000 | 0.5647 | 0.7020 | 0.702 |
| 0.5252 | 24.3 | 5200 | 0.5607 | 0.7050 | 0.706 |
| 0.5197 | 25.23 | 5400 | 0.5651 | 0.7112 | 0.712 |
| 0.5261 | 26.17 | 5600 | 0.5460 | 0.7165 | 0.717 |
| 0.5185 | 27.1 | 5800 | 0.5513 | 0.7096 | 0.71 |
| 0.519 | 28.04 | 6000 | 0.5565 | 0.7080 | 0.708 |
| 0.5155 | 28.97 | 6200 | 0.5603 | 0.7081 | 0.708 |
| 0.5191 | 29.91 | 6400 | 0.5500 | 0.7175 | 0.718 |
| 0.5181 | 30.84 | 6600 | 0.5504 | 0.7119 | 0.712 |
| 0.5134 | 31.78 | 6800 | 0.5602 | 0.7051 | 0.705 |
| 0.5147 | 32.71 | 7000 | 0.5548 | 0.7119 | 0.712 |
| 0.5155 | 33.64 | 7200 | 0.5516 | 0.7051 | 0.705 |
| 0.5056 | 34.58 | 7400 | 0.5622 | 0.6995 | 0.7 |
| 0.5148 | 35.51 | 7600 | 0.5555 | 0.7081 | 0.708 |
| 0.5112 | 36.45 | 7800 | 0.5629 | 0.7060 | 0.706 |
| 0.5112 | 37.38 | 8000 | 0.5522 | 0.7091 | 0.709 |
| 0.5062 | 38.32 | 8200 | 0.5634 | 0.7090 | 0.709 |
| 0.5075 | 39.25 | 8400 | 0.5607 | 0.7011 | 0.701 |
| 0.5055 | 40.19 | 8600 | 0.5566 | 0.7061 | 0.706 |
| 0.5047 | 41.12 | 8800 | 0.5585 | 0.7090 | 0.709 |
| 0.5065 | 42.06 | 9000 | 0.5600 | 0.7080 | 0.708 |
| 0.5049 | 42.99 | 9200 | 0.5601 | 0.7021 | 0.702 |
| 0.5049 | 43.93 | 9400 | 0.5579 | 0.7071 | 0.707 |
| 0.5032 | 44.86 | 9600 | 0.5576 | 0.7081 | 0.708 |
| 0.5063 | 45.79 | 9800 | 0.5600 | 0.7071 | 0.707 |
| 0.5001 | 46.73 | 10000 | 0.5618 | 0.7061 | 0.706 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_3-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T06:16:30+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | swj0419/bbc_retrain_new_STEP0000050 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T06:17:04+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4464
- F1 Score: 0.7959
- Accuracy: 0.796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5932 | 1.34 | 200 | 0.5470 | 0.7153 | 0.718 |
| 0.5448 | 2.68 | 400 | 0.5315 | 0.7370 | 0.737 |
| 0.5311 | 4.03 | 600 | 0.5281 | 0.7360 | 0.736 |
| 0.5254 | 5.37 | 800 | 0.5206 | 0.7400 | 0.74 |
| 0.522 | 6.71 | 1000 | 0.5165 | 0.7498 | 0.75 |
| 0.516 | 8.05 | 1200 | 0.5191 | 0.7451 | 0.746 |
| 0.5096 | 9.4 | 1400 | 0.5097 | 0.7499 | 0.75 |
| 0.5084 | 10.74 | 1600 | 0.5063 | 0.7479 | 0.748 |
| 0.5065 | 12.08 | 1800 | 0.5223 | 0.7481 | 0.749 |
| 0.5047 | 13.42 | 2000 | 0.5103 | 0.7469 | 0.747 |
| 0.5033 | 14.77 | 2200 | 0.5049 | 0.7520 | 0.753 |
| 0.4965 | 16.11 | 2400 | 0.5122 | 0.7526 | 0.753 |
| 0.4974 | 17.45 | 2600 | 0.5050 | 0.7537 | 0.754 |
| 0.4947 | 18.79 | 2800 | 0.5027 | 0.7478 | 0.748 |
| 0.4909 | 20.13 | 3000 | 0.5053 | 0.7460 | 0.746 |
| 0.4918 | 21.48 | 3200 | 0.5123 | 0.7519 | 0.752 |
| 0.4903 | 22.82 | 3400 | 0.5071 | 0.7530 | 0.753 |
| 0.4871 | 24.16 | 3600 | 0.5038 | 0.7456 | 0.746 |
| 0.4821 | 25.5 | 3800 | 0.5072 | 0.7488 | 0.749 |
| 0.4891 | 26.85 | 4000 | 0.5063 | 0.7511 | 0.752 |
| 0.4854 | 28.19 | 4200 | 0.5053 | 0.7549 | 0.755 |
| 0.4827 | 29.53 | 4400 | 0.5108 | 0.7490 | 0.749 |
| 0.4823 | 30.87 | 4600 | 0.5077 | 0.7530 | 0.753 |
| 0.4827 | 32.21 | 4800 | 0.5076 | 0.7487 | 0.749 |
| 0.4797 | 33.56 | 5000 | 0.5107 | 0.7558 | 0.756 |
| 0.4823 | 34.9 | 5200 | 0.5074 | 0.7550 | 0.755 |
| 0.4765 | 36.24 | 5400 | 0.5067 | 0.7527 | 0.753 |
| 0.481 | 37.58 | 5600 | 0.5042 | 0.7580 | 0.758 |
| 0.4767 | 38.93 | 5800 | 0.5042 | 0.7559 | 0.756 |
| 0.4756 | 40.27 | 6000 | 0.5029 | 0.7576 | 0.758 |
| 0.4763 | 41.61 | 6200 | 0.5035 | 0.7539 | 0.754 |
| 0.4761 | 42.95 | 6400 | 0.5079 | 0.7570 | 0.757 |
| 0.4737 | 44.3 | 6600 | 0.5080 | 0.7550 | 0.755 |
| 0.4767 | 45.64 | 6800 | 0.5121 | 0.7598 | 0.76 |
| 0.4739 | 46.98 | 7000 | 0.5067 | 0.7610 | 0.761 |
| 0.474 | 48.32 | 7200 | 0.5092 | 0.7600 | 0.76 |
| 0.4711 | 49.66 | 7400 | 0.5061 | 0.7610 | 0.761 |
| 0.4719 | 51.01 | 7600 | 0.5073 | 0.7640 | 0.764 |
| 0.4718 | 52.35 | 7800 | 0.5048 | 0.7528 | 0.753 |
| 0.4708 | 53.69 | 8000 | 0.5038 | 0.7548 | 0.755 |
| 0.4705 | 55.03 | 8200 | 0.5063 | 0.7610 | 0.761 |
| 0.472 | 56.38 | 8400 | 0.5058 | 0.76 | 0.76 |
| 0.4726 | 57.72 | 8600 | 0.5047 | 0.7549 | 0.755 |
| 0.4675 | 59.06 | 8800 | 0.5055 | 0.7560 | 0.756 |
| 0.4698 | 60.4 | 9000 | 0.5074 | 0.7620 | 0.762 |
| 0.469 | 61.74 | 9200 | 0.5046 | 0.7580 | 0.758 |
| 0.4726 | 63.09 | 9400 | 0.5054 | 0.7600 | 0.76 |
| 0.4676 | 64.43 | 9600 | 0.5057 | 0.7560 | 0.756 |
| 0.4726 | 65.77 | 9800 | 0.5063 | 0.7610 | 0.761 |
| 0.4663 | 67.11 | 10000 | 0.5057 | 0.7570 | 0.757 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_2-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T06:17:13+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold1
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window16-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window16-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0423
- Accuracy: 0.6478
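For quick inspection, a minimal inference sketch using the `transformers` pipeline API; the image path is a placeholder, and the label names depend on the undocumented imagefolder dataset.
```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold1",
)
print(clf("example.jpg"))  # placeholder image path
```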
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4592 | 1.0 | 924 | 1.4398 | 0.5180 |
| 1.2561 | 2.0 | 1848 | 1.1998 | 0.5970 |
| 1.555 | 3.0 | 2772 | 1.1434 | 0.6079 |
| 1.1153 | 4.0 | 3696 | 1.0796 | 0.6366 |
| 1.0327 | 5.0 | 4620 | 1.0669 | 0.6426 |
| 0.8742 | 6.0 | 5544 | 1.0460 | 0.6453 |
| 0.7982 | 7.0 | 6468 | 1.0642 | 0.6393 |
| 0.8689 | 8.0 | 7392 | 1.0720 | 0.6396 |
| 0.7857 | 9.0 | 8316 | 1.0542 | 0.6445 |
| 0.7277 | 10.0 | 9240 | 1.0423 | 0.6478 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swinv2-tiny-patch4-window16-256", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold1", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6477611940298508, "name": "Accuracy"}]}]}]} | onizukal/Boya1_RMSProp_1-e5_10Epoch_swinv2-tiny-patch4-window16-256_fold1 | null | [
"transformers",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-tiny-patch4-window16-256",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T06:17:16+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
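Pending details from the authors, a hedged sketch based only on this repo's `text-generation` tag; the prompt and generation settings are illustrative.
```python
from transformers import pipeline

gen = pipeline("text-generation", model="cmpktheo/gemma-2b-ft-G2E")
print(gen("Hello", max_new_tokens=64)[0]["generated_text"])
```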
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cmpktheo/gemma-2b-ft-G2E | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-04-27T06:18:02+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | zandfj/LLaMA2-7B-Chat_sft_moren_dpo_z_moren_042713 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T06:20:03+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NDD-ppma_test-content_tags
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3296
- Accuracy: 0.7930
- F1: 0.8297
- Precision: 0.9284
- Recall: 0.7930
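For quick inspection, a minimal inference sketch via the `transformers` pipeline; the input string is a placeholder since the card does not document the expected text domain or label set.
```python
from transformers import pipeline

clf = pipeline("text-classification", model="lgk03/NDD-ppma_test-content_tags")
print(clf("placeholder input text"))
```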
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.162 | 0.9990 | 722 | 0.3935 | 0.7930 | 0.8297 | 0.9284 | 0.7930 |
| 0.1176 | 1.9979 | 1444 | 0.3296 | 0.7930 | 0.8297 | 0.9284 | 0.7930 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "NDD-ppma_test-content_tags", "results": []}]} | lgk03/NDD-ppma_test-content_tags | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T06:21:10+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4597
- F1 Score: 0.7837
- Accuracy: 0.784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5749 | 1.34 | 200 | 0.5307 | 0.7430 | 0.744 |
| 0.5292 | 2.68 | 400 | 0.5246 | 0.7435 | 0.744 |
| 0.5144 | 4.03 | 600 | 0.5179 | 0.7526 | 0.753 |
| 0.5057 | 5.37 | 800 | 0.5118 | 0.7549 | 0.755 |
| 0.5003 | 6.71 | 1000 | 0.5201 | 0.7573 | 0.758 |
| 0.494 | 8.05 | 1200 | 0.5002 | 0.7520 | 0.752 |
| 0.4871 | 9.4 | 1400 | 0.5042 | 0.7510 | 0.751 |
| 0.4837 | 10.74 | 1600 | 0.5001 | 0.7520 | 0.752 |
| 0.4818 | 12.08 | 1800 | 0.5124 | 0.7566 | 0.757 |
| 0.4764 | 13.42 | 2000 | 0.5040 | 0.7559 | 0.756 |
| 0.476 | 14.77 | 2200 | 0.5011 | 0.7326 | 0.734 |
| 0.4666 | 16.11 | 2400 | 0.5087 | 0.7499 | 0.75 |
| 0.4677 | 17.45 | 2600 | 0.4994 | 0.7336 | 0.734 |
| 0.4619 | 18.79 | 2800 | 0.4987 | 0.7365 | 0.737 |
| 0.4563 | 20.13 | 3000 | 0.5070 | 0.7400 | 0.74 |
| 0.4577 | 21.48 | 3200 | 0.5136 | 0.7447 | 0.745 |
| 0.4532 | 22.82 | 3400 | 0.5117 | 0.7410 | 0.741 |
| 0.4501 | 24.16 | 3600 | 0.5011 | 0.7350 | 0.735 |
| 0.443 | 25.5 | 3800 | 0.5074 | 0.7470 | 0.747 |
| 0.4472 | 26.85 | 4000 | 0.4981 | 0.7440 | 0.744 |
| 0.4446 | 28.19 | 4200 | 0.5097 | 0.7429 | 0.743 |
| 0.4392 | 29.53 | 4400 | 0.5106 | 0.7368 | 0.737 |
| 0.4349 | 30.87 | 4600 | 0.5061 | 0.7360 | 0.736 |
| 0.4352 | 32.21 | 4800 | 0.5051 | 0.7360 | 0.736 |
| 0.4317 | 33.56 | 5000 | 0.5136 | 0.7449 | 0.745 |
| 0.4318 | 34.9 | 5200 | 0.5131 | 0.7470 | 0.747 |
| 0.4255 | 36.24 | 5400 | 0.5059 | 0.7418 | 0.742 |
| 0.428 | 37.58 | 5600 | 0.5116 | 0.7419 | 0.742 |
| 0.4222 | 38.93 | 5800 | 0.5093 | 0.7369 | 0.737 |
| 0.4214 | 40.27 | 6000 | 0.5080 | 0.7368 | 0.737 |
| 0.4193 | 41.61 | 6200 | 0.5054 | 0.7447 | 0.745 |
| 0.4191 | 42.95 | 6400 | 0.5093 | 0.7500 | 0.75 |
| 0.4156 | 44.3 | 6600 | 0.5104 | 0.7439 | 0.744 |
| 0.4172 | 45.64 | 6800 | 0.5234 | 0.7535 | 0.754 |
| 0.4129 | 46.98 | 7000 | 0.5135 | 0.7529 | 0.753 |
| 0.4132 | 48.32 | 7200 | 0.5147 | 0.7530 | 0.753 |
| 0.4106 | 49.66 | 7400 | 0.5118 | 0.7560 | 0.756 |
| 0.4103 | 51.01 | 7600 | 0.5158 | 0.7510 | 0.751 |
| 0.4057 | 52.35 | 7800 | 0.5146 | 0.7448 | 0.745 |
| 0.4064 | 53.69 | 8000 | 0.5110 | 0.7459 | 0.746 |
| 0.4078 | 55.03 | 8200 | 0.5133 | 0.7470 | 0.747 |
| 0.4054 | 56.38 | 8400 | 0.5162 | 0.7530 | 0.753 |
| 0.4048 | 57.72 | 8600 | 0.5126 | 0.7480 | 0.748 |
| 0.4 | 59.06 | 8800 | 0.5131 | 0.7500 | 0.75 |
| 0.4016 | 60.4 | 9000 | 0.5184 | 0.7490 | 0.749 |
| 0.4004 | 61.74 | 9200 | 0.5147 | 0.7470 | 0.747 |
| 0.4038 | 63.09 | 9400 | 0.5179 | 0.7490 | 0.749 |
| 0.3989 | 64.43 | 9600 | 0.5157 | 0.7470 | 0.747 |
| 0.4009 | 65.77 | 9800 | 0.5170 | 0.7500 | 0.75 |
| 0.3977 | 67.11 | 10000 | 0.5158 | 0.7500 | 0.75 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_tf_2-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
]
| null | 2024-04-27T06:22:16+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-llama-adapterhappy2sad-study-50-0.009 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-27T06:23:30+00:00 |