| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1–900k) | metadata (stringlengths, 2–438k) | id (stringlengths, 5–122) | last_modified (null) | tags (sequencelengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25) | arxiv (sequencelengths, 0–201) | languages (sequencelengths, 0–1.83k) | tags_str (stringlengths, 17–9.34k) | text_str (stringlengths, 0–389k) | text_lists (sequencelengths, 0–722) | processed_texts (sequencelengths, 1–723) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TriadParty/deepsex-34b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/deepsex-34b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
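As a hedged illustration only (not an official snippet from this repository), one way to run a downloaded quant locally is with the llama-cpp-python bindings. The sketch assumes you have already fetched the Q4_K_M file from the table below and that your llama-cpp-python build supports this model's architecture.

```python
# Minimal sketch using llama-cpp-python (assumed installed via
# `pip install llama-cpp-python`); the path assumes deepsex-34b.Q4_K_M.gguf
# from this repository was downloaded beforehand.
from llama_cpp import Llama

llm = Llama(
    model_path="deepsex-34b.Q4_K_M.gguf",  # local path to the downloaded quant
    n_ctx=4096,                            # context length; adjust to your hardware
)

output = llm("Write a short scene description:", max_tokens=128)
print(output["choices"][0]["text"])
```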
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/deepsex-34b-GGUF/resolve/main/deepsex-34b.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["roleplay", "not-for-all-audiences"], "datasets": ["lemonilia/LimaRP", "PygmalionAI/PIPPA"], "base_model": "TriadParty/deepsex-34b", "quantized_by": "mradermacher"} | mradermacher/deepsex-34b-GGUF | null | [
"transformers",
"gguf",
"roleplay",
"not-for-all-audiences",
"en",
"dataset:lemonilia/LimaRP",
"dataset:PygmalionAI/PIPPA",
"base_model:TriadParty/deepsex-34b",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T20:47:37+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #roleplay #not-for-all-audiences #en #dataset-lemonilia/LimaRP #dataset-PygmalionAI/PIPPA #base_model-TriadParty/deepsex-34b #license-mit #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #roleplay #not-for-all-audiences #en #dataset-lemonilia/LimaRP #dataset-PygmalionAI/PIPPA #base_model-TriadParty/deepsex-34b #license-mit #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meeting_summarizer_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the dataset "huuuyeah/meetingbank".
It achieves the following results on the evaluation set:
- Loss: 2.3916
- Rouge1: 0.3517
- Rouge2: 0.2684
- Rougel: 0.3353
- Rougelsum: 0.3363
- Gen Len: 18.7564
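As a rough illustration of how this summarizer can be used, here is a hedged sketch with the standard 🤗 Transformers summarization pipeline; the example transcript and generation lengths are made up for illustration and are not from the original evaluation.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint by its repo id (listed in the card metadata).
summarizer = pipeline("summarization", model="cameronslee/meeting_summarizer_model")

transcript = (
    "The committee discussed the proposed budget amendments, heard public "
    "comment, and voted to move the item to the next full council meeting."
)
print(summarizer(transcript, max_length=60, min_length=10)[0]["summary_text"])
```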
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
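For reference, a hedged sketch of how these values could be expressed as `Seq2SeqTrainingArguments` is shown below; any argument not listed above (such as `output_dir`) is an assumption rather than part of the original training script.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch mapping the listed hyperparameters to Seq2SeqTrainingArguments;
# output_dir is a placeholder, not taken from the original run.
training_args = Seq2SeqTrainingArguments(
    output_dir="meeting_summarizer_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
)
```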
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 324 | 2.9030 | 0.2906 | 0.1982 | 0.2662 | 0.2663 | 18.9687 |
| 5.7333 | 2.0 | 648 | 2.5094 | 0.3313 | 0.2456 | 0.3132 | 0.3138 | 18.7506 |
| 5.7333 | 3.0 | 972 | 2.4188 | 0.3514 | 0.2673 | 0.3345 | 0.335 | 18.7749 |
| 3.9805 | 4.0 | 1296 | 2.3916 | 0.3517 | 0.2684 | 0.3353 | 0.3363 | 18.7564 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["huuuyeah/meetingbank"], "metrics": ["rouge"], "base_model": "google-t5/t5-small", "model-index": [{"name": "meeting_summarizer_model", "results": []}]} | cameronslee/meeting_summarizer_model | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"en",
"dataset:huuuyeah/meetingbank",
"base_model:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:47:40+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #en #dataset-huuuyeah/meetingbank #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| meeting\_summarizer\_model
==========================
This model is a fine-tuned version of google-t5/t5-small on the dataset "huuuyeah/meetingbank".
It achieves the following results on the evaluation set:
* Loss: 2.3916
* Rouge1: 0.3517
* Rouge2: 0.2684
* Rougel: 0.3353
* Rougelsum: 0.3363
* Gen Len: 18.7564
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #en #dataset-huuuyeah/meetingbank #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Aratako/Antler-7B-RP-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
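As a hedged illustration (not an official snippet from this repository), a single quant file can be fetched with huggingface_hub and then loaded with llama-cpp-python; both packages are assumed to be installed, and the chosen filename matches the Q4_K_M entry in the table below.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repository, then load it locally.
path = hf_hub_download(
    repo_id="mradermacher/Antler-7B-RP-v2-GGUF",
    filename="Antler-7B-RP-v2.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Once upon a time,", max_tokens=64)["choices"][0]["text"])
```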
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Antler-7B-RP-v2-GGUF/resolve/main/Antler-7B-RP-v2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "nsfw"], "datasets": ["grimulkan/LimaRP-augmented", "Aratako/Rosebleu-1on1-Dialogues-RP"], "base_model": "Aratako/Antler-7B-RP-v2", "quantized_by": "mradermacher"} | mradermacher/Antler-7B-RP-v2-GGUF | null | [
"transformers",
"gguf",
"not-for-all-audiences",
"nsfw",
"en",
"dataset:grimulkan/LimaRP-augmented",
"dataset:Aratako/Rosebleu-1on1-Dialogues-RP",
"base_model:Aratako/Antler-7B-RP-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T20:49:44+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #not-for-all-audiences #nsfw #en #dataset-grimulkan/LimaRP-augmented #dataset-Aratako/Rosebleu-1on1-Dialogues-RP #base_model-Aratako/Antler-7B-RP-v2 #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #not-for-all-audiences #nsfw #en #dataset-grimulkan/LimaRP-augmented #dataset-Aratako/Rosebleu-1on1-Dialogues-RP #base_model-Aratako/Antler-7B-RP-v2 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
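A minimal hedged sketch in the meantime: it assumes the checkpoint loads with the standard causal-LM Auto classes, consistent with the `llama` and `text-generation` tags on this repository, and is not an official example from the authors.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OwOOwO/dumbo-krillin32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Simple greedy-ish generation; prompt and length are illustrative only.
inputs = tokenizer("Hello, how are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```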
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOOwO/dumbo-krillin32 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T20:57:23+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Kousha/animated_pikachuHD_LORA
<Gallery />
## Model description
These are Kousha/animated_pikachuHD_LORA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of PIK Pikachu to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Kousha/animated_pikachuHD_LORA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
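As a hedged sketch (not the authors' snippet), one way to run the adapter with 🤗 Diffusers is shown below. It assumes the weights load via `load_lora_weights`, uses the base model and fp16-fix VAE named in this card, and uses the trigger phrase from the Trigger words section; the `dora` tag suggests the adapter may use DoRA, which recent Diffusers/PEFT versions handle through the same loader, but that is an assumption.

```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

# Base model and VAE are the ones named in this card; dtype, device and
# inference settings are assumptions.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Kousha/animated_pikachuHD_LORA")

image = pipe("a photo of PIK Pikachu", num_inference_steps=30).images[0]
image.save("pik_pikachu.png")
```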
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of PIK Pikachu", "widget": []} | Kousha/animated_pikachuHD_LORA | null | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-04-16T20:59:22+00:00 | [] | [] | TAGS
#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# SDXL LoRA DreamBooth - Kousha/animated_pikachuHD_LORA
<Gallery />
## Model description
These are Kousha/animated_pikachuHD_LORA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of PIK Pikachu to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# SDXL LoRA DreamBooth - Kousha/animated_pikachuHD_LORA\n\n<Gallery />",
"## Model description\n\nThese are Kousha/animated_pikachuHD_LORA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Trigger words\n\nYou should use a photo of PIK Pikachu to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# SDXL LoRA DreamBooth - Kousha/animated_pikachuHD_LORA\n\n<Gallery />",
"## Model description\n\nThese are Kousha/animated_pikachuHD_LORA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Trigger words\n\nYou should use a photo of PIK Pikachu to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Amit Tewari
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
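A minimal hedged sketch (not an official snippet): it assumes the checkpoint works with the standard text-classification pipeline; the example clause is made up, and the returned label names depend on the fine-tuning setup, which this card does not yet document.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AmitTewari/LegalPro-BERT-base")

# Example legal clause; label meanings are not documented in this card.
print(classifier("The lessee shall keep the premises in good repair at all times."))
```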
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["legal"], "datasets": ["coastalcph/lex_glue"], "metrics": ["accuracy"], "pipeline_tag": "text-classification"} | AmitTewari/LegalPro-BERT-base | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"legal",
"en",
"dataset:coastalcph/lex_glue",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T21:03:25+00:00 | [
"1910.09700"
] | [
"en"
] | TAGS
#transformers #safetensors #bert #text-classification #legal #en #dataset-coastalcph/lex_glue #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: [Amit Tewari]
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: [Amit Tewari]\n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #legal #en #dataset-coastalcph/lex_glue #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: [Amit Tewari]\n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8273
- F1 Score: 0.6319
- Accuracy: 0.6333
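Because this repository contains a PEFT adapter rather than a full model, using it generally means attaching the adapter to the base model named above. The sketch below is hedged: the choice of Auto class (a sequence-classification head is assumed for this histone-mark task) and the `trust_remote_code` flag for the seqsight base model are assumptions, not documented details.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_32768_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_32768_512_30M-L32_all"

# Load the base model, then attach this repository's PEFT adapter.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)
```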
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6657 | 14.29 | 200 | 0.6598 | 0.6051 | 0.6072 |
| 0.6154 | 28.57 | 400 | 0.6647 | 0.6108 | 0.6158 |
| 0.5934 | 42.86 | 600 | 0.6682 | 0.6200 | 0.6201 |
| 0.5734 | 57.14 | 800 | 0.6853 | 0.6183 | 0.6256 |
| 0.557 | 71.43 | 1000 | 0.6896 | 0.6215 | 0.6227 |
| 0.5459 | 85.71 | 1200 | 0.6811 | 0.6343 | 0.6370 |
| 0.5369 | 100.0 | 1400 | 0.6820 | 0.6286 | 0.6313 |
| 0.5309 | 114.29 | 1600 | 0.6716 | 0.6308 | 0.6304 |
| 0.5244 | 128.57 | 1800 | 0.6977 | 0.6356 | 0.6353 |
| 0.5192 | 142.86 | 2000 | 0.7043 | 0.6293 | 0.6313 |
| 0.5138 | 157.14 | 2200 | 0.6969 | 0.6343 | 0.6373 |
| 0.506 | 171.43 | 2400 | 0.7095 | 0.6315 | 0.6359 |
| 0.5017 | 185.71 | 2600 | 0.6929 | 0.6382 | 0.6385 |
| 0.4949 | 200.0 | 2800 | 0.6940 | 0.6360 | 0.6376 |
| 0.4873 | 214.29 | 3000 | 0.7157 | 0.6363 | 0.6402 |
| 0.4816 | 228.57 | 3200 | 0.7101 | 0.6393 | 0.6402 |
| 0.4731 | 242.86 | 3400 | 0.7249 | 0.6316 | 0.6359 |
| 0.468 | 257.14 | 3600 | 0.7389 | 0.6361 | 0.6390 |
| 0.4615 | 271.43 | 3800 | 0.7569 | 0.6404 | 0.6422 |
| 0.453 | 285.71 | 4000 | 0.7495 | 0.6352 | 0.6388 |
| 0.4481 | 300.0 | 4200 | 0.7490 | 0.6363 | 0.6396 |
| 0.4415 | 314.29 | 4400 | 0.7442 | 0.6384 | 0.6393 |
| 0.4338 | 328.57 | 4600 | 0.7543 | 0.6364 | 0.6385 |
| 0.4312 | 342.86 | 4800 | 0.7506 | 0.6351 | 0.6368 |
| 0.4247 | 357.14 | 5000 | 0.7713 | 0.6415 | 0.6436 |
| 0.4197 | 371.43 | 5200 | 0.8069 | 0.6420 | 0.6431 |
| 0.4147 | 385.71 | 5400 | 0.7809 | 0.6381 | 0.6396 |
| 0.4098 | 400.0 | 5600 | 0.7901 | 0.6393 | 0.6399 |
| 0.4041 | 414.29 | 5800 | 0.8033 | 0.6427 | 0.6442 |
| 0.4018 | 428.57 | 6000 | 0.7933 | 0.6368 | 0.6385 |
| 0.3976 | 442.86 | 6200 | 0.7965 | 0.6372 | 0.6393 |
| 0.3938 | 457.14 | 6400 | 0.8192 | 0.6380 | 0.6390 |
| 0.3896 | 471.43 | 6600 | 0.8040 | 0.6337 | 0.6362 |
| 0.3865 | 485.71 | 6800 | 0.8210 | 0.6319 | 0.6347 |
| 0.3838 | 500.0 | 7000 | 0.8119 | 0.6315 | 0.6333 |
| 0.3801 | 514.29 | 7200 | 0.8190 | 0.6328 | 0.6359 |
| 0.3766 | 528.57 | 7400 | 0.8166 | 0.6328 | 0.6342 |
| 0.3751 | 542.86 | 7600 | 0.8317 | 0.6301 | 0.6325 |
| 0.3734 | 557.14 | 7800 | 0.8229 | 0.6334 | 0.6353 |
| 0.3697 | 571.43 | 8000 | 0.8426 | 0.6344 | 0.6370 |
| 0.3683 | 585.71 | 8200 | 0.8210 | 0.6300 | 0.6313 |
| 0.3672 | 600.0 | 8400 | 0.8232 | 0.6301 | 0.6316 |
| 0.3666 | 614.29 | 8600 | 0.8302 | 0.6322 | 0.6336 |
| 0.3642 | 628.57 | 8800 | 0.8432 | 0.6307 | 0.6316 |
| 0.3618 | 642.86 | 9000 | 0.8426 | 0.6312 | 0.6322 |
| 0.3614 | 657.14 | 9200 | 0.8410 | 0.6319 | 0.6333 |
| 0.3599 | 671.43 | 9400 | 0.8359 | 0.6325 | 0.6333 |
| 0.3596 | 685.71 | 9600 | 0.8431 | 0.6339 | 0.6353 |
| 0.3585 | 700.0 | 9800 | 0.8419 | 0.6327 | 0.6342 |
| 0.3586 | 714.29 | 10000 | 0.8391 | 0.6315 | 0.6330 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:03:54+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_EMP\_H3K36me3-seqsight\_32768\_512\_30M-L32\_all
=====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8273
* F1 Score: 0.6319
* Accuracy: 0.6333
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | cackerman/rewrites_gemma7b_it_4bit_ft_full_big | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T21:05:18+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** reciperesearch
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
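For reference, a minimal inference sketch using Unsloth's loader is shown below. The repository name, sequence length, and 4-bit loading are assumptions added here, not part of the original card.

```python
from unsloth import FastLanguageModel

# Assumptions: this repo holds loadable weights/adapters and 2048 tokens of context suffice.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="reciperesearch/SFT_v0.1",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch the model into fast inference mode

inputs = tokenizer("Write a short dinner recipe:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```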
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | reciperesearch/SFT_v0.1 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T21:06:20+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: reciperesearch
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: reciperesearch\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: reciperesearch\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Spaetzle-v68-7b
Spaetzle-v68-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [cstr/Spaetzle-v65-7b](https://huggingface.co/cstr/Spaetzle-v65-7b)
* [cstr/Spaetzle-v64-7b](https://huggingface.co/cstr/Spaetzle-v64-7b)
## 🧩 Configuration
```yaml
models:
- model: cstr/Spaetzle-v67-7b
# no parameters necessary for base model
- model: cstr/Spaetzle-v65-7b
parameters:
density: 0.60
weight: 0.30
- model: cstr/Spaetzle-v64-7b
parameters:
density: 0.65
weight: 0.30
merge_method: dare_ties
base_model: cstr/Spaetzle-v67-7b
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "cstr/Spaetzle-v68-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "cstr/Spaetzle-v65-7b", "cstr/Spaetzle-v64-7b"], "base_model": ["cstr/Spaetzle-v65-7b", "cstr/Spaetzle-v64-7b"]} | cstr/Spaetzle-v68-7b | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"cstr/Spaetzle-v65-7b",
"cstr/Spaetzle-v64-7b",
"conversational",
"base_model:cstr/Spaetzle-v65-7b",
"base_model:cstr/Spaetzle-v64-7b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T21:06:52+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #cstr/Spaetzle-v65-7b #cstr/Spaetzle-v64-7b #conversational #base_model-cstr/Spaetzle-v65-7b #base_model-cstr/Spaetzle-v64-7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Spaetzle-v68-7b
Spaetzle-v68-7b is a merge of the following models using LazyMergekit:
* cstr/Spaetzle-v65-7b
* cstr/Spaetzle-v64-7b
## Configuration
## Usage
| [
"# Spaetzle-v68-7b\n\nSpaetzle-v68-7b is a merge of the following models using LazyMergekit:\n* cstr/Spaetzle-v65-7b\n* cstr/Spaetzle-v64-7b",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #cstr/Spaetzle-v65-7b #cstr/Spaetzle-v64-7b #conversational #base_model-cstr/Spaetzle-v65-7b #base_model-cstr/Spaetzle-v64-7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Spaetzle-v68-7b\n\nSpaetzle-v68-7b is a merge of the following models using LazyMergekit:\n* cstr/Spaetzle-v65-7b\n* cstr/Spaetzle-v64-7b",
"## Configuration",
"## Usage"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5036
- F1 Score: 0.5690
- Accuracy: 0.5704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
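As an illustration, the settings above map roughly onto the following `TrainingArguments`. This is a sketch only; the output directory is a placeholder and the evaluation cadence (every 200 steps, matching the results table) is inferred rather than taken from the original script.

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; Adam betas/epsilon are the Transformers defaults.
training_args = TrainingArguments(
    output_dir="GUE_mouse_0-seqsight_32768_512_30M-L32_all",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,
    logging_steps=200,
)
```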
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6363 | 50.0 | 200 | 0.7461 | 0.6009 | 0.6025 |
| 0.494 | 100.0 | 400 | 0.8197 | 0.6182 | 0.6185 |
| 0.3859 | 150.0 | 600 | 0.9121 | 0.6058 | 0.6074 |
| 0.3371 | 200.0 | 800 | 0.9478 | 0.6161 | 0.6198 |
| 0.3078 | 250.0 | 1000 | 1.0104 | 0.6068 | 0.6074 |
| 0.289 | 300.0 | 1200 | 1.0733 | 0.6043 | 0.6049 |
| 0.2745 | 350.0 | 1400 | 1.0673 | 0.6158 | 0.6160 |
| 0.261 | 400.0 | 1600 | 1.1014 | 0.6087 | 0.6086 |
| 0.2499 | 450.0 | 1800 | 1.0635 | 0.6025 | 0.6025 |
| 0.2437 | 500.0 | 2000 | 1.2052 | 0.5911 | 0.5926 |
| 0.2329 | 550.0 | 2200 | 1.1563 | 0.6128 | 0.6136 |
| 0.2223 | 600.0 | 2400 | 1.2026 | 0.5976 | 0.5975 |
| 0.2149 | 650.0 | 2600 | 1.2167 | 0.5955 | 0.5963 |
| 0.2049 | 700.0 | 2800 | 1.2830 | 0.6036 | 0.6037 |
| 0.1963 | 750.0 | 3000 | 1.2509 | 0.5974 | 0.5988 |
| 0.1902 | 800.0 | 3200 | 1.2645 | 0.6087 | 0.6086 |
| 0.1794 | 850.0 | 3400 | 1.3172 | 0.6055 | 0.6062 |
| 0.1734 | 900.0 | 3600 | 1.3660 | 0.6054 | 0.6062 |
| 0.1662 | 950.0 | 3800 | 1.3073 | 0.6134 | 0.6136 |
| 0.1603 | 1000.0 | 4000 | 1.3719 | 0.6149 | 0.6148 |
| 0.1518 | 1050.0 | 4200 | 1.3446 | 0.6185 | 0.6185 |
| 0.1469 | 1100.0 | 4400 | 1.3893 | 0.6112 | 0.6111 |
| 0.1402 | 1150.0 | 4600 | 1.3898 | 0.6061 | 0.6062 |
| 0.1332 | 1200.0 | 4800 | 1.4544 | 0.6196 | 0.6198 |
| 0.1284 | 1250.0 | 5000 | 1.4469 | 0.6075 | 0.6074 |
| 0.125 | 1300.0 | 5200 | 1.5006 | 0.6135 | 0.6136 |
| 0.1211 | 1350.0 | 5400 | 1.4055 | 0.6099 | 0.6099 |
| 0.116 | 1400.0 | 5600 | 1.6209 | 0.6099 | 0.6099 |
| 0.1131 | 1450.0 | 5800 | 1.5094 | 0.6124 | 0.6123 |
| 0.1077 | 1500.0 | 6000 | 1.5635 | 0.6073 | 0.6074 |
| 0.1049 | 1550.0 | 6200 | 1.5939 | 0.5976 | 0.5975 |
| 0.1038 | 1600.0 | 6400 | 1.5890 | 0.6105 | 0.6111 |
| 0.0986 | 1650.0 | 6600 | 1.5193 | 0.5988 | 0.5988 |
| 0.095 | 1700.0 | 6800 | 1.6186 | 0.6041 | 0.6049 |
| 0.094 | 1750.0 | 7000 | 1.5863 | 0.6044 | 0.6049 |
| 0.092 | 1800.0 | 7200 | 1.6114 | 0.6072 | 0.6074 |
| 0.0873 | 1850.0 | 7400 | 1.5879 | 0.6042 | 0.6049 |
| 0.0858 | 1900.0 | 7600 | 1.5945 | 0.5986 | 0.5988 |
| 0.0847 | 1950.0 | 7800 | 1.6149 | 0.5998 | 0.6 |
| 0.082 | 2000.0 | 8000 | 1.6610 | 0.6034 | 0.6037 |
| 0.0807 | 2050.0 | 8200 | 1.6572 | 0.5999 | 0.6 |
| 0.0802 | 2100.0 | 8400 | 1.6178 | 0.5995 | 0.6 |
| 0.0787 | 2150.0 | 8600 | 1.7130 | 0.5998 | 0.6 |
| 0.0768 | 2200.0 | 8800 | 1.6388 | 0.5996 | 0.6 |
| 0.0761 | 2250.0 | 9000 | 1.6834 | 0.5974 | 0.5975 |
| 0.0757 | 2300.0 | 9200 | 1.6474 | 0.5984 | 0.5988 |
| 0.0746 | 2350.0 | 9400 | 1.7170 | 0.6034 | 0.6037 |
| 0.0731 | 2400.0 | 9600 | 1.6990 | 0.5996 | 0.6 |
| 0.0724 | 2450.0 | 9800 | 1.7141 | 0.5986 | 0.5988 |
| 0.0726 | 2500.0 | 10000 | 1.6941 | 0.5973 | 0.5975 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_mouse_0-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:08:16+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_mouse\_0-seqsight\_32768\_512\_30M-L32\_all
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5036
* F1 Score: 0.5690
* Accuracy: 0.5704
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
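Since the card does not yet provide starter code, a minimal sketch is given below. It assumes the checkpoint follows the standard BLIP-2 conditional-generation interface suggested by the repository name; this has not been confirmed by the authors, and the image URL is a placeholder.

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

repo = "SeeonQwQ/blip2_frame_v4.0"  # assumption: taken from the repository name
processor = Blip2Processor.from_pretrained(repo)
model = Blip2ForConditionalGeneration.from_pretrained(repo)

# Placeholder image; replace with a real video frame.
image = Image.open(requests.get("https://example.com/frame.jpg", stream=True).raw)
inputs = processor(images=image, text="Describe this frame.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```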
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | SeeonQwQ/blip2_frame_v4.0 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T21:09:58+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4173
- F1 Score: 0.8044
- Accuracy: 0.8053
## Model description
More information needed
## Intended uses & limitations
More information needed
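The card does not document an inference path; below is a minimal loading sketch for the published adapter. The choice of auto-class and the two-label head are assumptions (the seqsight backbone may require its own loading code), so treat this as illustrative only.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_32768_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_1-seqsight_32768_512_30M-L32_all"

# Assumptions: the backbone loads through the standard auto-classes and the task is binary classification.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```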
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5779 | 5.56 | 200 | 0.4997 | 0.7426 | 0.7451 |
| 0.4991 | 11.11 | 400 | 0.4759 | 0.7654 | 0.7663 |
| 0.4735 | 16.67 | 600 | 0.4591 | 0.7747 | 0.7752 |
| 0.459 | 22.22 | 800 | 0.4506 | 0.7841 | 0.7850 |
| 0.4433 | 27.78 | 1000 | 0.4403 | 0.7903 | 0.7905 |
| 0.4334 | 33.33 | 1200 | 0.4344 | 0.7880 | 0.7890 |
| 0.4235 | 38.89 | 1400 | 0.4275 | 0.7956 | 0.7960 |
| 0.4151 | 44.44 | 1600 | 0.4262 | 0.7945 | 0.7957 |
| 0.4076 | 50.0 | 1800 | 0.4211 | 0.8019 | 0.8024 |
| 0.4005 | 55.56 | 2000 | 0.4179 | 0.7983 | 0.7997 |
| 0.3956 | 61.11 | 2200 | 0.4224 | 0.8009 | 0.8018 |
| 0.3908 | 66.67 | 2400 | 0.4130 | 0.8017 | 0.8024 |
| 0.3846 | 72.22 | 2600 | 0.4162 | 0.8050 | 0.8055 |
| 0.3808 | 77.78 | 2800 | 0.4128 | 0.8070 | 0.8073 |
| 0.3759 | 83.33 | 3000 | 0.4174 | 0.8020 | 0.8028 |
| 0.3739 | 88.89 | 3200 | 0.4124 | 0.8050 | 0.8052 |
| 0.3688 | 94.44 | 3400 | 0.4119 | 0.8073 | 0.8082 |
| 0.3652 | 100.0 | 3600 | 0.4088 | 0.8074 | 0.8077 |
| 0.3619 | 105.56 | 3800 | 0.4178 | 0.8075 | 0.8082 |
| 0.3584 | 111.11 | 4000 | 0.4124 | 0.8092 | 0.8099 |
| 0.355 | 116.67 | 4200 | 0.4109 | 0.8076 | 0.8080 |
| 0.3507 | 122.22 | 4400 | 0.4120 | 0.8074 | 0.8085 |
| 0.3495 | 127.78 | 4600 | 0.4179 | 0.8078 | 0.8083 |
| 0.3462 | 133.33 | 4800 | 0.4180 | 0.8071 | 0.8079 |
| 0.342 | 138.89 | 5000 | 0.4157 | 0.8074 | 0.8082 |
| 0.3411 | 144.44 | 5200 | 0.4123 | 0.8058 | 0.8064 |
| 0.3395 | 150.0 | 5400 | 0.4162 | 0.8045 | 0.8055 |
| 0.3362 | 155.56 | 5600 | 0.4223 | 0.8066 | 0.8074 |
| 0.3333 | 161.11 | 5800 | 0.4243 | 0.8011 | 0.8021 |
| 0.3323 | 166.67 | 6000 | 0.4244 | 0.8049 | 0.8058 |
| 0.3295 | 172.22 | 6200 | 0.4245 | 0.8045 | 0.8052 |
| 0.327 | 177.78 | 6400 | 0.4231 | 0.8054 | 0.8062 |
| 0.3246 | 183.33 | 6600 | 0.4274 | 0.8037 | 0.8044 |
| 0.3245 | 188.89 | 6800 | 0.4223 | 0.8047 | 0.8055 |
| 0.3219 | 194.44 | 7000 | 0.4189 | 0.8068 | 0.8073 |
| 0.32 | 200.0 | 7200 | 0.4295 | 0.8039 | 0.8050 |
| 0.3177 | 205.56 | 7400 | 0.4276 | 0.8077 | 0.8085 |
| 0.3154 | 211.11 | 7600 | 0.4316 | 0.8070 | 0.8077 |
| 0.315 | 216.67 | 7800 | 0.4265 | 0.8062 | 0.8070 |
| 0.3136 | 222.22 | 8000 | 0.4352 | 0.8064 | 0.8074 |
| 0.3123 | 227.78 | 8200 | 0.4322 | 0.8087 | 0.8093 |
| 0.3117 | 233.33 | 8400 | 0.4351 | 0.8077 | 0.8086 |
| 0.3114 | 238.89 | 8600 | 0.4329 | 0.8064 | 0.8071 |
| 0.3088 | 244.44 | 8800 | 0.4363 | 0.8061 | 0.8070 |
| 0.3107 | 250.0 | 9000 | 0.4311 | 0.8075 | 0.8080 |
| 0.3084 | 255.56 | 9200 | 0.4373 | 0.8080 | 0.8087 |
| 0.3074 | 261.11 | 9400 | 0.4339 | 0.8072 | 0.8080 |
| 0.3076 | 266.67 | 9600 | 0.4353 | 0.8076 | 0.8085 |
| 0.3051 | 272.22 | 9800 | 0.4358 | 0.8087 | 0.8095 |
| 0.306 | 277.78 | 10000 | 0.4359 | 0.8073 | 0.8082 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_mouse_1-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_1-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:13:19+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_mouse\_1-seqsight\_32768\_512\_30M-L32\_all
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4173
* F1 Score: 0.8044
* Accuracy: 0.8053
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1972
- Accuracy: 0.9533
## Model description
More information needed
## Intended uses & limitations
More information needed
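In the absence of documented usage, a minimal inference sketch is shown below. The repository name is taken from where the model is published, and the label semantics are an assumption since the card does not state what the classes represent.

```python
from transformers import pipeline

# Assumption: the fine-tuned checkpoint is published under this repository name.
classifier = pipeline("text-classification", model="elrosech/my_awesome_model")

print(classifier("This was a surprisingly enjoyable read."))
# -> e.g. [{'label': 'LABEL_1', 'score': 0.99}]; label names depend on the training config.
```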
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2002 | 1.0 | 1563 | 0.1453 | 0.9467 |
| 0.1226 | 2.0 | 3126 | 0.1972 | 0.9533 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "roberta-base", "model-index": [{"name": "my_awesome_model", "results": []}]} | elrosech/my_awesome_model | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T21:13:34+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| my\_awesome\_model
==================
This model is a fine-tuned version of roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1972
* Accuracy: 0.9533
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers | ## Upstream model config
```json
{
"_name_or_path": "output/hermes-llama2-4k/checkpoint-2259",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 4096,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 32,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.32.0.dev0",
"use_cache": false,
"vocab_size": 32000
}
```
### Dataset
```python
DATASET = "abideen/Cosmopedia-100k-pretrain" # @param
from datasets import load_dataset
from torch import nn
import torch.nn.functional as F
from transformers.models.llama.modeling_llama import LlamaForCausalLM, LlamaRMSNorm

# Load the pretraining corpus
dataset = load_dataset(DATASET)

# Linear layers are converted to BitLinear; activation_quant and weight_quant are
# the quantization helpers defined in the Inference section below.
class BitLinear(nn.Linear):
    def forward(self, x):
        w = self.weight # a weight tensor with shape [d, k]
        x = x.to(w.device)
        RMSNorm = LlamaRMSNorm(x.shape[-1]).to(w.device)
        x_norm = RMSNorm(x)
        # A trick for implementing Straight-Through-Estimator (STE) using detach()
        x_quant = x_norm + (activation_quant(x_norm) - x_norm).detach()
        w_quant = w + (weight_quant(w) - w).detach()
        y = F.linear(x_quant, w_quant)
        return y

### Create the llama model with our custom config. Convert it to bitnet.
# `config` is the LlamaConfig chosen for the small model; `convert_to_bitnet`
# is defined in the Inference section below.
model = LlamaForCausalLM(config)
convert_to_bitnet(model, copy_weights=False)
```
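The `Trainer` call in the Training section below refers to `tokenizer`, `tokenized_data`, and `data_collator`, which the card never defines. A minimal sketch of how they could be prepared is given here; the tokenizer repo, context length, and the assumption that the corpus exposes a `text` column are not confirmed by the original card.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Assumption: a Llama 2 tokenizer is reused; the original run may have used another repo.
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")
tokenizer.pad_token = tokenizer.eos_token

CONTEXT_LENGTH = 2048  # assumption; must stay within max_position_embeddings (4096)

def tokenize(batch):
    # Assumption: the dataset rows carry the raw documents in a "text" column.
    return tokenizer(batch["text"], truncation=True, max_length=CONTEXT_LENGTH)

tokenized_data = dataset.map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

# Standard causal-LM collator (no masked-LM objective).
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
```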
### Training
```python
args = TrainingArguments(
output_dir=output_path,
per_device_train_batch_size=BATCH_SIZE,
logging_steps=100,
gradient_accumulation_steps=2,
num_train_epochs=EPOCHS,
weight_decay=0.01,
warmup_steps=0.1,
lr_scheduler_type="cosine",
learning_rate=LEARNING_RATE,
# max_steps=5000,
save_steps=0.25,
fp16=True,
report_to="wandb"
)
trainer = Trainer(
model=model,
tokenizer=tokenizer,
args=args,
data_collator=data_collator,
train_dataset=tokenized_data["train"],
)
trainer.train()
```
### Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.models.llama.modeling_llama import *
# Load a pretrained BitNet model
model = "saadnaeem/Llama2-70M-Cosmopedia-100k-Pretrain"
tokenizer = AutoTokenizer.from_pretrained(model)
model = AutoModelForCausalLM.from_pretrained(model)
def activation_quant(x):
scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp_(min=1e-5)
y = (x * scale).round().clamp_(-128, 127)
y = y / scale
return y
def weight_quant(w):
scale = 1.0 / w.abs().mean().clamp_(min=1e-5)
u = (w * scale).round().clamp_(-1, 1)
u = u / scale
return u
class BitLinear(nn.Linear):
def forward(self, x):
w = self.weight # a weight tensor with shape [d, k]
x = x.to(w.device)
RMSNorm = LlamaRMSNorm(x.shape[-1]).to(w.device)
x_norm = RMSNorm(x)
        # A trick for implementing Straight-Through-Estimator (STE) using detach()
x_quant = x_norm + (activation_quant(x_norm) - x_norm).detach()
w_quant = w + (weight_quant(w) - w).detach()
y = F.linear(x_quant, w_quant)
return y
def convert_to_bitnet(model, copy_weights):
for name, module in model.named_modules():
# Replace linear layers with BitNet
if isinstance(module, LlamaSdpaAttention) or isinstance(module, LlamaMLP):
for child_name, child_module in module.named_children():
if isinstance(child_module, nn.Linear):
bitlinear = BitLinear(child_module.in_features, child_module.out_features, child_module.bias is not None).to(device="cuda:0")
if copy_weights:
bitlinear.weight = child_module.weight
if child_module.bias is not None:
bitlinear.bias = child_module.bias
setattr(module, child_name, bitlinear)
# Remove redundant input_layernorms
elif isinstance(module, LlamaDecoderLayer):
for child_name, child_module in module.named_children():
if isinstance(child_module, LlamaRMSNorm) and child_name == "input_layernorm":
setattr(module, child_name, nn.Identity().to(device="cuda:0"))
convert_to_bitnet(model, copy_weights=True)
model.to(device="cuda:0")
prompt = "What is Machine Learning?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
generate_ids = model.generate(inputs.input_ids, max_length=50)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
``` | {"license": "apache-2.0"} | saadnaeem/Llama2-70M-Cosmopedia-100k-Pretrained | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T21:15:46+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| ## Upstream model config
### Dataset
### Training
### Inference
| [
"## Upstream model config",
"### Dataset",
"### Training",
"### Inference"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Upstream model config",
"### Dataset",
"### Training",
"### Inference"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7020
- F1 Score: 0.5541
- Accuracy: 0.5544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6734 | 25.0 | 200 | 0.6828 | 0.5811 | 0.5810 |
| 0.6071 | 50.0 | 400 | 0.7224 | 0.5667 | 0.5704 |
| 0.5535 | 75.0 | 600 | 0.7597 | 0.5683 | 0.5682 |
| 0.5206 | 100.0 | 800 | 0.7570 | 0.5658 | 0.5661 |
| 0.5032 | 125.0 | 1000 | 0.7890 | 0.5744 | 0.5746 |
| 0.4908 | 150.0 | 1200 | 0.7986 | 0.5554 | 0.5587 |
| 0.4805 | 175.0 | 1400 | 0.8312 | 0.5636 | 0.5635 |
| 0.4725 | 200.0 | 1600 | 0.8089 | 0.5702 | 0.5709 |
| 0.4646 | 225.0 | 1800 | 0.7853 | 0.5580 | 0.5619 |
| 0.4548 | 250.0 | 2000 | 0.8389 | 0.5635 | 0.5635 |
| 0.4443 | 275.0 | 2200 | 0.8697 | 0.5664 | 0.5672 |
| 0.4335 | 300.0 | 2400 | 0.8835 | 0.5659 | 0.5672 |
| 0.4216 | 325.0 | 2600 | 0.8795 | 0.5548 | 0.5550 |
| 0.4098 | 350.0 | 2800 | 0.8803 | 0.5617 | 0.5624 |
| 0.398 | 375.0 | 3000 | 0.8796 | 0.5688 | 0.5693 |
| 0.386 | 400.0 | 3200 | 0.9467 | 0.5662 | 0.5661 |
| 0.372 | 425.0 | 3400 | 0.9330 | 0.5677 | 0.5682 |
| 0.3595 | 450.0 | 3600 | 0.9417 | 0.5625 | 0.5635 |
| 0.3436 | 475.0 | 3800 | 0.9770 | 0.5667 | 0.5666 |
| 0.3337 | 500.0 | 4000 | 0.9937 | 0.5663 | 0.5666 |
| 0.3204 | 525.0 | 4200 | 1.0209 | 0.5700 | 0.5704 |
| 0.3094 | 550.0 | 4400 | 1.0134 | 0.5757 | 0.5762 |
| 0.296 | 575.0 | 4600 | 1.0714 | 0.5758 | 0.5757 |
| 0.2862 | 600.0 | 4800 | 1.0688 | 0.5655 | 0.5656 |
| 0.2749 | 625.0 | 5000 | 1.0567 | 0.5717 | 0.5720 |
| 0.2667 | 650.0 | 5200 | 1.0925 | 0.5789 | 0.5789 |
| 0.2577 | 675.0 | 5400 | 1.0812 | 0.5678 | 0.5677 |
| 0.2481 | 700.0 | 5600 | 1.1345 | 0.5693 | 0.5693 |
| 0.2425 | 725.0 | 5800 | 1.1532 | 0.5715 | 0.5714 |
| 0.2338 | 750.0 | 6000 | 1.1763 | 0.5649 | 0.5661 |
| 0.229 | 775.0 | 6200 | 1.1709 | 0.5732 | 0.5736 |
| 0.2213 | 800.0 | 6400 | 1.1957 | 0.5731 | 0.5736 |
| 0.2158 | 825.0 | 6600 | 1.1775 | 0.5743 | 0.5746 |
| 0.2103 | 850.0 | 6800 | 1.1844 | 0.5699 | 0.5698 |
| 0.2051 | 875.0 | 7000 | 1.2155 | 0.5798 | 0.5805 |
| 0.2015 | 900.0 | 7200 | 1.2114 | 0.5798 | 0.5799 |
| 0.1979 | 925.0 | 7400 | 1.2149 | 0.5710 | 0.5709 |
| 0.1927 | 950.0 | 7600 | 1.2343 | 0.5699 | 0.5698 |
| 0.1908 | 975.0 | 7800 | 1.2661 | 0.5709 | 0.5709 |
| 0.1866 | 1000.0 | 8000 | 1.2632 | 0.5670 | 0.5672 |
| 0.1843 | 1025.0 | 8200 | 1.2791 | 0.5750 | 0.5757 |
| 0.182 | 1050.0 | 8400 | 1.2690 | 0.5705 | 0.5704 |
| 0.1789 | 1075.0 | 8600 | 1.2997 | 0.5709 | 0.5709 |
| 0.1789 | 1100.0 | 8800 | 1.2907 | 0.5667 | 0.5666 |
| 0.176 | 1125.0 | 9000 | 1.2863 | 0.5691 | 0.5693 |
| 0.1746 | 1150.0 | 9200 | 1.3073 | 0.5742 | 0.5746 |
| 0.1724 | 1175.0 | 9400 | 1.3032 | 0.5678 | 0.5677 |
| 0.1716 | 1200.0 | 9600 | 1.3097 | 0.5659 | 0.5661 |
| 0.1702 | 1225.0 | 9800 | 1.3016 | 0.5693 | 0.5693 |
| 0.1709 | 1250.0 | 10000 | 1.3014 | 0.5671 | 0.5672 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_mouse_4-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_4-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:18:08+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_mouse\_4-seqsight\_32768\_512\_30M-L32\_all
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7020
* F1 Score: 0.5541
* Accuracy: 0.5544
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2021
- F1 Score: 0.8110
- Accuracy: 0.8110
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3806 | 100.0 | 200 | 0.6077 | 0.7835 | 0.7835 |
| 0.1161 | 200.0 | 400 | 0.8438 | 0.7923 | 0.7927 |
| 0.0604 | 300.0 | 600 | 0.9929 | 0.8014 | 0.8018 |
| 0.0375 | 400.0 | 800 | 1.1156 | 0.7927 | 0.7927 |
| 0.0265 | 500.0 | 1000 | 1.1782 | 0.8079 | 0.8079 |
| 0.0199 | 600.0 | 1200 | 1.3548 | 0.8079 | 0.8079 |
| 0.0159 | 700.0 | 1400 | 1.3302 | 0.7865 | 0.7866 |
| 0.0134 | 800.0 | 1600 | 1.3239 | 0.7927 | 0.7927 |
| 0.0118 | 900.0 | 1800 | 1.4458 | 0.8018 | 0.8018 |
| 0.0098 | 1000.0 | 2000 | 1.3863 | 0.7894 | 0.7896 |
| 0.009 | 1100.0 | 2200 | 1.5925 | 0.7835 | 0.7835 |
| 0.0083 | 1200.0 | 2400 | 1.4763 | 0.7743 | 0.7744 |
| 0.0076 | 1300.0 | 2600 | 1.5739 | 0.7866 | 0.7866 |
| 0.007 | 1400.0 | 2800 | 1.6619 | 0.7835 | 0.7835 |
| 0.0069 | 1500.0 | 3000 | 1.6448 | 0.7866 | 0.7866 |
| 0.0061 | 1600.0 | 3200 | 1.5807 | 0.7896 | 0.7896 |
| 0.006 | 1700.0 | 3400 | 1.6041 | 0.7925 | 0.7927 |
| 0.0054 | 1800.0 | 3600 | 1.6627 | 0.7896 | 0.7896 |
| 0.0055 | 1900.0 | 3800 | 1.7138 | 0.7987 | 0.7988 |
| 0.0048 | 2000.0 | 4000 | 1.6279 | 0.7957 | 0.7957 |
| 0.0053 | 2100.0 | 4200 | 1.7187 | 0.7774 | 0.7774 |
| 0.0044 | 2200.0 | 4400 | 1.8112 | 0.7866 | 0.7866 |
| 0.0041 | 2300.0 | 4600 | 1.9975 | 0.7774 | 0.7774 |
| 0.0044 | 2400.0 | 4800 | 1.8580 | 0.8016 | 0.8018 |
| 0.0041 | 2500.0 | 5000 | 1.6180 | 0.7803 | 0.7805 |
| 0.0038 | 2600.0 | 5200 | 1.8182 | 0.7984 | 0.7988 |
| 0.0042 | 2700.0 | 5400 | 1.8635 | 0.7896 | 0.7896 |
| 0.0039 | 2800.0 | 5600 | 1.8840 | 0.7926 | 0.7927 |
| 0.0034 | 2900.0 | 5800 | 1.6524 | 0.7926 | 0.7927 |
| 0.0035 | 3000.0 | 6000 | 1.8053 | 0.7805 | 0.7805 |
| 0.0033 | 3100.0 | 6200 | 1.7863 | 0.7866 | 0.7866 |
| 0.0031 | 3200.0 | 6400 | 1.9512 | 0.7986 | 0.7988 |
| 0.0032 | 3300.0 | 6600 | 1.9566 | 0.7896 | 0.7896 |
| 0.003 | 3400.0 | 6800 | 1.9034 | 0.7835 | 0.7835 |
| 0.0028 | 3500.0 | 7000 | 1.9831 | 0.7860 | 0.7866 |
| 0.0031 | 3600.0 | 7200 | 1.8807 | 0.7955 | 0.7957 |
| 0.0031 | 3700.0 | 7400 | 1.9776 | 0.7926 | 0.7927 |
| 0.0026 | 3800.0 | 7600 | 2.0079 | 0.7896 | 0.7896 |
| 0.0024 | 3900.0 | 7800 | 2.0643 | 0.7926 | 0.7927 |
| 0.0029 | 4000.0 | 8000 | 1.9400 | 0.7804 | 0.7805 |
| 0.0026 | 4100.0 | 8200 | 1.9740 | 0.7865 | 0.7866 |
| 0.0026 | 4200.0 | 8400 | 1.9179 | 0.7896 | 0.7896 |
| 0.0023 | 4300.0 | 8600 | 1.9192 | 0.7896 | 0.7896 |
| 0.0023 | 4400.0 | 8800 | 2.0339 | 0.7866 | 0.7866 |
| 0.0022 | 4500.0 | 9000 | 1.9499 | 0.7835 | 0.7835 |
| 0.0022 | 4600.0 | 9200 | 1.9891 | 0.7866 | 0.7866 |
| 0.0021 | 4700.0 | 9400 | 1.9303 | 0.7835 | 0.7835 |
| 0.0022 | 4800.0 | 9600 | 1.9639 | 0.7896 | 0.7896 |
| 0.002 | 4900.0 | 9800 | 2.0178 | 0.7865 | 0.7866 |
| 0.0019 | 5000.0 | 10000 | 2.0181 | 0.7835 | 0.7835 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_mouse_2-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_2-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:19:01+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_mouse\_2-seqsight\_32768\_512\_30M-L32\_all
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2021
* F1 Score: 0.8110
* Accuracy: 0.8110
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8005
- F1 Score: 0.6689
- Accuracy: 0.6695
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4005 | 200.0 | 200 | 1.0351 | 0.6810 | 0.6820 |
| 0.0913 | 400.0 | 400 | 1.4187 | 0.6879 | 0.6904 |
| 0.0428 | 600.0 | 600 | 1.6277 | 0.7227 | 0.7238 |
| 0.0268 | 800.0 | 800 | 1.7265 | 0.7150 | 0.7155 |
| 0.0205 | 1000.0 | 1000 | 1.7873 | 0.6945 | 0.6946 |
| 0.0164 | 1200.0 | 1200 | 1.9462 | 0.6940 | 0.6946 |
| 0.0143 | 1400.0 | 1400 | 1.9181 | 0.7029 | 0.7029 |
| 0.0119 | 1600.0 | 1600 | 2.0466 | 0.7109 | 0.7113 |
| 0.0112 | 1800.0 | 1800 | 2.0393 | 0.6861 | 0.6862 |
| 0.0098 | 2000.0 | 2000 | 2.0528 | 0.6980 | 0.6987 |
| 0.0093 | 2200.0 | 2200 | 2.1318 | 0.6897 | 0.6904 |
| 0.0089 | 2400.0 | 2400 | 2.0503 | 0.7062 | 0.7071 |
| 0.008 | 2600.0 | 2600 | 2.0294 | 0.6984 | 0.6987 |
| 0.0078 | 2800.0 | 2800 | 2.1265 | 0.6862 | 0.6862 |
| 0.0072 | 3000.0 | 3000 | 2.1927 | 0.7026 | 0.7029 |
| 0.0067 | 3200.0 | 3200 | 2.2194 | 0.7021 | 0.7029 |
| 0.0067 | 3400.0 | 3400 | 2.0639 | 0.7026 | 0.7029 |
| 0.0062 | 3600.0 | 3600 | 2.3383 | 0.6983 | 0.6987 |
| 0.0056 | 3800.0 | 3800 | 2.2529 | 0.6987 | 0.6987 |
| 0.0053 | 4000.0 | 4000 | 2.2848 | 0.6978 | 0.6987 |
| 0.0053 | 4200.0 | 4200 | 2.4715 | 0.6856 | 0.6862 |
| 0.0047 | 4400.0 | 4400 | 2.3165 | 0.6883 | 0.6904 |
| 0.0049 | 4600.0 | 4600 | 2.4946 | 0.7096 | 0.7113 |
| 0.0051 | 4800.0 | 4800 | 2.3895 | 0.7021 | 0.7029 |
| 0.0045 | 5000.0 | 5000 | 2.1077 | 0.7068 | 0.7071 |
| 0.0045 | 5200.0 | 5200 | 2.3522 | 0.7068 | 0.7071 |
| 0.0044 | 5400.0 | 5400 | 2.3619 | 0.6980 | 0.6987 |
| 0.004 | 5600.0 | 5600 | 2.5004 | 0.7146 | 0.7155 |
| 0.0037 | 5800.0 | 5800 | 2.5637 | 0.7018 | 0.7029 |
| 0.0039 | 6000.0 | 6000 | 2.3137 | 0.7024 | 0.7029 |
| 0.004 | 6200.0 | 6200 | 2.4672 | 0.7021 | 0.7029 |
| 0.0036 | 6400.0 | 6400 | 2.5630 | 0.7027 | 0.7029 |
| 0.0036 | 6600.0 | 6600 | 2.3849 | 0.6971 | 0.6987 |
| 0.0035 | 6800.0 | 6800 | 2.3850 | 0.7139 | 0.7155 |
| 0.0032 | 7000.0 | 7000 | 2.5127 | 0.7068 | 0.7071 |
| 0.0029 | 7200.0 | 7200 | 2.5741 | 0.6904 | 0.6904 |
| 0.0031 | 7400.0 | 7400 | 2.4346 | 0.6945 | 0.6946 |
| 0.0027 | 7600.0 | 7600 | 2.4071 | 0.7109 | 0.7113 |
| 0.0029 | 7800.0 | 7800 | 2.4664 | 0.7021 | 0.7029 |
| 0.0028 | 8000.0 | 8000 | 2.7324 | 0.7096 | 0.7113 |
| 0.0028 | 8200.0 | 8200 | 2.3488 | 0.7107 | 0.7113 |
| 0.0027 | 8400.0 | 8400 | 2.5021 | 0.7047 | 0.7071 |
| 0.0026 | 8600.0 | 8600 | 2.5843 | 0.6943 | 0.6946 |
| 0.0027 | 8800.0 | 8800 | 2.4919 | 0.7052 | 0.7071 |
| 0.0025 | 9000.0 | 9000 | 2.5269 | 0.7064 | 0.7071 |
| 0.0025 | 9200.0 | 9200 | 2.4955 | 0.7068 | 0.7071 |
| 0.0023 | 9400.0 | 9400 | 2.6224 | 0.7021 | 0.7029 |
| 0.0024 | 9600.0 | 9600 | 2.4806 | 0.6984 | 0.6987 |
| 0.0025 | 9800.0 | 9800 | 2.4590 | 0.7109 | 0.7113 |
| 0.0023 | 10000.0 | 10000 | 2.4738 | 0.7150 | 0.7155 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_mouse_3-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_3-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:19:12+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_mouse\_3-seqsight\_32768\_512\_30M-L32\_all
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.8005
* F1 Score: 0.6689
* Accuracy: 0.6695
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
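Pending an official snippet, a minimal, hedged example is sketched below. It assumes this checkpoint is a seq2seq summarization fine-tune (as the repository name `bart-large-xsum_v2_trained_on_250` suggests); if the task differs, adjust the pipeline accordingly:

```python
from transformers import pipeline

# Assumption: the checkpoint is a BART summarization fine-tune.
summarizer = pipeline("summarization", model="relu-ntnu/bart-large-xsum_v2_trained_on_250")
print(summarizer("Replace this with the article you want summarized.")[0]["summary_text"])
```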
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | relu-ntnu/bart-large-xsum_v2_trained_on_250 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T21:21:32+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # WizardChatML 7B v0
[GGUF Quants](https://huggingface.co/mrfakename/WizardChatML-7B-v0-Q4_K_M-GGUF)
I personally think ChatML is the best prompt format. It allows:
1. Easier templating for generation
2. Lower risk of inadvertently generating role tokens
3. Better long-context performance and higher quality on quantized models
This model is an experiment attempting to extend WizardLM 2 7B to ChatML. It was trained on a small ChatML dataset, so it probably isn't as good as the WizardLM 2 base model, but it's an attempt.
Aside from using the ChatML prompt format, this model supports system prompts. In fact, it adheres *very* well to these prompts.
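For reference, a ChatML conversation with a system message is laid out like the sketch below; the messages are placeholders, and it assumes the repo's tokenizer ships with a ChatML chat template:

```python
from transformers import AutoTokenizer

# Assumption: the tokenizer in this repo carries a ChatML chat template.
tok = AutoTokenizer.from_pretrained("mrfakename/WizardChatML-7B-v0")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about autumn."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Expected layout of `prompt` (ChatML):
# <|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# Write a haiku about autumn.<|im_end|>
# <|im_start|>assistant
```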
If you want to use this model for task-specific purposes, you should probably fine-tune it.
## Capabilities & Challenges
* Seems ok-ish at writing
* Pretty good at math
* Sometimes calls itself ChatGPT/OpenAI
## Risks
It has not been trained on guardrail data and may generate offensive content if prompted.
## License
If you use this model, you must include the Apache 2.0 license AND the following notice:
I'm releasing this model under the Apache 2.0 license, with the added restriction that it cannot be used to compete with OpenAI (due to the nature of the training data). Additionally, this model was finetuned from the WizardLM 2 7B model, which was recently removed by Microsoft (it was Apache licensed, but may have been trained on NC-licensed data). You are responsible for the usage of this model. You are responsible for checking that your usage of this model is legal in your jurisdiction. Commercial use is not advised, as this model is finetuned from a model that may have been trained on NC-licensed data. Make sure to consult a lawyer before using in production or commercially. | {"language": ["en"], "license": "other", "license_name": "apache-2.0-mostly", "pipeline_tag": "text-generation"} | mrfakename/WizardChatML-7B-v0 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T21:22:13+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # WizardChatML 7B v0
GGUF Quants
I personally think ChatML is the best prompt format. It allows:
1. Easier templating for generation
2. Lower risk of inadvertently generating role tokens
3. Better long-context performance and higher quality on quantized models
This model is an experiment attempting to extend WizardLM 2 7B to ChatML. It was trained on a small ChatML dataset, so it probably isn't as good as the WizardLM 2 base model, but it's an attempt.
Aside from using the ChatML prompt format, this model supports system prompts. In fact, it adheres *very* well to these prompts.
If you want to use this model for task-specific purposes, you should probably fine-tune it.
## Capabilities & Challenges
* Seems ok-ish at writing
* Pretty good at math
* Sometimes calls itself ChatGPT/OpenAI
## Risks
It has not been trained on guardrail data and may generate offensive content if prompted.
## License
If you use this model, you must include the Apache 2.0 license AND the following notice:
I'm releasing this model under the Apache 2.0 license, with the added restriction that it cannot be used to compete with OpenAI (due to the nature of the training data). Additionally, this model was finetuned from the WizardLM 2 7B model, which was recently removed by Microsoft (it was Apache licensed, but may have been trained on NC-licensed data). You are responsible for the usage of this model. You are responsible for checking that your usage of this model is legal in your jurisdiction. Commercial use is not advised, as this model is finetuned from a model that may have been trained on NC-licensed data. Make sure to consult a lawyer before using in production or commercially. | [
"# WizardChatML 7B v0\n\nGGUF Quants\n\nI personally think ChatML is the best prompt format. It allows:\n\n1. Easier templating for generation\n2. Lower risk of inadvertently generating role tokens\n3. Better long-context performance and higher quality on quantized models\n\nThis model is an experiment attempting to extend WizardLM 2 7B to ChatML. It was trained on a small ChatML dataset, it probably isn't as good as WizardLM 2 Base, but it's an attempt.\n\nAside from using the ChatML prompt format, this model supports system prompts. In fact, it adheres *very* well to these prompts.\n\nIf you want to use this model for task-specific purposes, you should probably fine-tune it.",
"## Capabilities & Challenges\n\n* Seems ok-ish at writing\n* Pretty good at math\n* Sometimes calls itself ChatGPT/OpenAI",
"## Risks\n\nIt has not been trained on guardrail data and may generate offensive content if prompted.",
"## License\n\nIf you use this model, you must include the Apache 2.0 license AND the following notice:\n\nI'm releasing this model under the Apache 2.0 license, with the added restriction that it cannot be used to compete with OpenAI (due to the nature of the training data). Additionally, this model was finetuned from the WizardLM 2 7B model, which was recently removed by Microsoft (it was Apache licensed, but may have been trained on NC-licensed data). You are responsible for the usage of this model. You are responsible for checking that your usage of this model is legal in your jurisdiction. Commercial use is not advised, as this model is finetuned from a model that may have been trained on NC-licensed data. Make sure to consult a lawyer before using in production or commercially."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# WizardChatML 7B v0\n\nGGUF Quants\n\nI personally think ChatML is the best prompt format. It allows:\n\n1. Easier templating for generation\n2. Lower risk of inadvertently generating role tokens\n3. Better long-context performance and higher quality on quantized models\n\nThis model is an experiment attempting to extend WizardLM 2 7B to ChatML. It was trained on a small ChatML dataset, it probably isn't as good as WizardLM 2 Base, but it's an attempt.\n\nAside from using the ChatML prompt format, this model supports system prompts. In fact, it adheres *very* well to these prompts.\n\nIf you want to use this model for task-specific purposes, you should probably fine-tune it.",
"## Capabilities & Challenges\n\n* Seems ok-ish at writing\n* Pretty good at math\n* Sometimes calls itself ChatGPT/OpenAI",
"## Risks\n\nIt has not been trained on guardrail data and may generate offensive content if prompted.",
"## License\n\nIf you use this model, you must include the Apache 2.0 license AND the following notice:\n\nI'm releasing this model under the Apache 2.0 license, with the added restriction that it cannot be used to compete with OpenAI (due to the nature of the training data). Additionally, this model was finetuned from the WizardLM 2 7B model, which was recently removed by Microsoft (it was Apache licensed, but may have been trained on NC-licensed data). You are responsible for the usage of this model. You are responsible for checking that your usage of this model is legal in your jurisdiction. Commercial use is not advised, as this model is finetuned from a model that may have been trained on NC-licensed data. Make sure to consult a lawyer before using in production or commercially."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralv1_spectral_r8_3e5_e3
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
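While usage details aren't documented, a minimal sketch for applying this adapter on top of the base model (and optionally merging it, as the companion `*_merged` repository suggests) could look like the following; it assumes a LoRA-style adapter that PEFT can merge:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only; assumes a mergeable (LoRA-style) adapter. Adjust dtype/device to your setup.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "fangzhaoz/mistralv1_spectral_r8_3e5_e3")
model = model.merge_and_unload()  # optional: fold adapter weights into the base model
```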
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistralv1_spectral_r8_3e5_e3", "results": []}]} | fangzhaoz/mistralv1_spectral_r8_3e5_e3 | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T21:22:44+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
|
# mistralv1_spectral_r8_3e5_e3
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# mistralv1_spectral_r8_3e5_e3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n",
"# mistralv1_spectral_r8_3e5_e3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
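No official snippet is provided; a minimal, hedged example for a merged Mistral-style causal LM (repo id taken from this card's metadata) would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: standard causal-LM generation; adjust dtype/device to your hardware.
model_id = "fangzhaoz/mistralv1_spectral_r8_3e5_e3_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```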
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | fangzhaoz/mistralv1_spectral_r8_3e5_e3_merged | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T21:27:33+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | sample-factory |
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r DiegoT200/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
| {"library_name": "sample-factory", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "sample-factory"], "model-index": [{"name": "APPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "doom_health_gathering_supreme", "type": "doom_health_gathering_supreme"}, "metrics": [{"type": "mean_reward", "value": "11.37 +/- 6.23", "name": "mean_reward", "verified": false}]}]}]} | DiegoT200/rl_course_vizdoom_health_gathering_supreme | null | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-16T21:27:43+00:00 | [] | [] | TAGS
#sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
A(n) APPO model trained on the doom_health_gathering_supreme environment.
This model was trained using Sample-Factory 2.0: URL
Documentation for how to use Sample-Factory can be found at URL
## Downloading the model
After installing Sample-Factory, download the model with:
## Using the model
To run the model after download, use the 'enjoy' script corresponding to this environment:
You can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.
See URL for more details
## Training with this model
To continue training with this model, use the 'train' script corresponding to this environment:
Note, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at.
| [
"## Downloading the model\n\nAfter installing Sample-Factory, download the model with:",
"## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details",
"## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at."
] | [
"TAGS\n#sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"## Downloading the model\n\nAfter installing Sample-Factory, download the model with:",
"## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details",
"## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8351
- F1 Score: 0.6604
- Accuracy: 0.6675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.9504 | 11.11 | 200 | 0.8887 | 0.5649 | 0.6129 |
| 0.8605 | 22.22 | 400 | 0.8410 | 0.5882 | 0.6300 |
| 0.8157 | 33.33 | 600 | 0.8227 | 0.6094 | 0.6298 |
| 0.7814 | 44.44 | 800 | 0.8101 | 0.6195 | 0.6361 |
| 0.7545 | 55.56 | 1000 | 0.8019 | 0.6298 | 0.6374 |
| 0.7327 | 66.67 | 1200 | 0.7938 | 0.6354 | 0.6495 |
| 0.7169 | 77.78 | 1400 | 0.7938 | 0.6375 | 0.6534 |
| 0.7024 | 88.89 | 1600 | 0.7928 | 0.6375 | 0.6460 |
| 0.6913 | 100.0 | 1800 | 0.7954 | 0.6379 | 0.6436 |
| 0.6836 | 111.11 | 2000 | 0.8003 | 0.6403 | 0.6559 |
| 0.6746 | 122.22 | 2200 | 0.7941 | 0.6398 | 0.6488 |
| 0.6672 | 133.33 | 2400 | 0.7926 | 0.6446 | 0.6548 |
| 0.6623 | 144.44 | 2600 | 0.7969 | 0.6407 | 0.6510 |
| 0.6573 | 155.56 | 2800 | 0.7970 | 0.6448 | 0.6512 |
| 0.6508 | 166.67 | 3000 | 0.7975 | 0.6433 | 0.6521 |
| 0.6469 | 177.78 | 3200 | 0.7953 | 0.6446 | 0.6532 |
| 0.6405 | 188.89 | 3400 | 0.7882 | 0.6458 | 0.6556 |
| 0.635 | 200.0 | 3600 | 0.7955 | 0.6467 | 0.6618 |
| 0.6298 | 211.11 | 3800 | 0.7936 | 0.6475 | 0.6585 |
| 0.6235 | 222.22 | 4000 | 0.7882 | 0.6463 | 0.6550 |
| 0.6185 | 233.33 | 4200 | 0.7957 | 0.6500 | 0.6600 |
| 0.6145 | 244.44 | 4400 | 0.8050 | 0.6530 | 0.6646 |
| 0.6071 | 255.56 | 4600 | 0.8003 | 0.6498 | 0.6611 |
| 0.6013 | 266.67 | 4800 | 0.7966 | 0.6546 | 0.6648 |
| 0.5983 | 277.78 | 5000 | 0.8117 | 0.6512 | 0.6646 |
| 0.5914 | 288.89 | 5200 | 0.8083 | 0.6543 | 0.6637 |
| 0.5859 | 300.0 | 5400 | 0.8134 | 0.6527 | 0.6644 |
| 0.5813 | 311.11 | 5600 | 0.8123 | 0.6541 | 0.6616 |
| 0.5766 | 322.22 | 5800 | 0.8093 | 0.6582 | 0.6659 |
| 0.5706 | 333.33 | 6000 | 0.8188 | 0.6540 | 0.6616 |
| 0.5669 | 344.44 | 6200 | 0.8151 | 0.6535 | 0.6657 |
| 0.5623 | 355.56 | 6400 | 0.8199 | 0.6565 | 0.6686 |
| 0.5563 | 366.67 | 6600 | 0.8247 | 0.6536 | 0.6637 |
| 0.5528 | 377.78 | 6800 | 0.8184 | 0.6568 | 0.6644 |
| 0.5493 | 388.89 | 7000 | 0.8289 | 0.6553 | 0.6673 |
| 0.5447 | 400.0 | 7200 | 0.8170 | 0.6571 | 0.6673 |
| 0.5408 | 411.11 | 7400 | 0.8219 | 0.6585 | 0.6657 |
| 0.5385 | 422.22 | 7600 | 0.8269 | 0.6582 | 0.6668 |
| 0.5343 | 433.33 | 7800 | 0.8339 | 0.6581 | 0.6675 |
| 0.5326 | 444.44 | 8000 | 0.8352 | 0.6580 | 0.6653 |
| 0.5284 | 455.56 | 8200 | 0.8306 | 0.6635 | 0.6694 |
| 0.527 | 466.67 | 8400 | 0.8328 | 0.6601 | 0.6688 |
| 0.5243 | 477.78 | 8600 | 0.8376 | 0.6633 | 0.6705 |
| 0.5221 | 488.89 | 8800 | 0.8348 | 0.6631 | 0.6705 |
| 0.5203 | 500.0 | 9000 | 0.8364 | 0.6622 | 0.6703 |
| 0.5198 | 511.11 | 9200 | 0.8373 | 0.6605 | 0.6686 |
| 0.519 | 522.22 | 9400 | 0.8361 | 0.6610 | 0.6692 |
| 0.5164 | 533.33 | 9600 | 0.8378 | 0.6602 | 0.6675 |
| 0.5165 | 544.44 | 9800 | 0.8363 | 0.6632 | 0.6710 |
| 0.517 | 555.56 | 10000 | 0.8373 | 0.6635 | 0.6719 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:33:00+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_splice\_reconstructed-seqsight\_32768\_512\_30M-L32\_all
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8351
* F1 Score: 0.6604
* Accuracy: 0.6675
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
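The card leaves this blank; as a minimal sketch for a custom-code, fill-mask checkpoint like this one (repo id from the metadata below), loading typically requires `trust_remote_code`:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Sketch only: the repo ships custom modeling code, hence trust_remote_code=True.
model_id = "kuleshov-group/caduceus-ps_seqlen-1k_d_model-118_n_layer-4_lr-8e-3"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)
```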
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | kuleshov-group/caduceus-ps_seqlen-1k_d_model-118_n_layer-4_lr-8e-3 | null | [
"transformers",
"safetensors",
"caduceus",
"fill-mask",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | null | 2024-04-16T21:33:00+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #caduceus #fill-mask #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #caduceus #fill-mask #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
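This section is left as a placeholder in the card. As a minimal sketch only, assuming this checkpoint wires its custom code into the standard `AutoTokenizer` and `AutoModelForMaskedLM` classes (the card itself does not confirm this), loading could look like:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Repository id taken from this model's metadata; trust_remote_code is assumed to be
# required because the checkpoint ships its architecture as custom code.
model_id = "yairschiff/caduceus-ps_seqlen-1k_d_model-118_n_layer-4_lr-8e-3"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)
```

The `fill-mask` pipeline tag on this repository suggests masked-token prediction as the intended entry point, but the card does not document any usage.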
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | yairschiff/caduceus-ps_seqlen-1k_d_model-118_n_layer-4_lr-8e-3 | null | [
"transformers",
"safetensors",
"caduceus",
"fill-mask",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | null | 2024-04-16T21:33:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #caduceus #fill-mask #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #caduceus #fill-mask #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | kuleshov-group/caduceus-ph_seqlen-1k_d_model-118_n_layer-4_lr-8e-3 | null | [
"transformers",
"safetensors",
"caduceus",
"fill-mask",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | null | 2024-04-16T21:33:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #caduceus #fill-mask #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #caduceus #fill-mask #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | yairschiff/caduceus-ph_seqlen-1k_d_model-118_n_layer-4_lr-8e-3 | null | [
"transformers",
"safetensors",
"caduceus",
"fill-mask",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | null | 2024-04-16T21:33:52+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #caduceus #fill-mask #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #caduceus #fill-mask #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-githubCybersecurity
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7557
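For reference, a masked-language-modeling cross-entropy of 2.7557 corresponds to a perplexity of exp(2.7557) ≈ 15.7; this figure is derived here rather than reported by the trainer.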
## Model description
More information needed
## Intended uses & limitations
More information needed
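The intended domain is not documented, so the following is only an illustrative sketch of querying the checkpoint through the standard fill-mask pipeline; the prompt is invented, and the repository id is taken from this model's metadata:

```python
from transformers import pipeline

# Repository id taken from this model's metadata; the prompt is an invented example.
fill = pipeline(
    "fill-mask",
    model="chihan0425/distilroberta-base-finetuned-githubCybersecurity",
)
# distilroberta-base uses RoBERTa-style tokenization, so the mask token is <mask>.
print(fill("The attacker exploited a <mask> vulnerability in the web server.", top_k=3))
```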
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for a roughly equivalent `TrainingArguments` setup):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
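A hedged sketch, not the original training script: the listed values map onto a `TrainingArguments` object roughly as below. `output_dir` and the evaluation strategy are assumptions, and the Adam betas and epsilon above are the library defaults, so they are left unset:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilroberta-base-finetuned-githubCybersecurity",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    evaluation_strategy="epoch",  # assumption, consistent with one validation loss per epoch
)
```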
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2084 | 1.0 | 601 | 2.9314 |
| 2.9457 | 2.0 | 1202 | 2.8003 |
| 2.8274 | 3.0 | 1803 | 2.6980 |
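As a further hedged illustration (this does not reproduce the trainer's evaluation protocol), a masked-token cross-entropy of the same kind as the validation loss above can be probed on a single invented sentence:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "chihan0425/distilroberta-base-finetuned-githubCybersecurity"  # from this repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

enc = tokenizer("A buffer overflow was reported in the parser.", return_tensors="pt")
labels = torch.full_like(enc["input_ids"], -100)   # -100 is ignored by the loss
labels[0, 2] = enc["input_ids"][0, 2]              # score only the token at position 2
enc["input_ids"][0, 2] = tokenizer.mask_token_id   # replace it with <mask>

with torch.no_grad():
    loss = model(**enc, labels=labels).loss
print(float(loss))
```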
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.0.post100
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilroberta-base", "model-index": [{"name": "distilroberta-base-finetuned-githubCybersecurity", "results": []}]} | chihan0425/distilroberta-base-finetuned-githubCybersecurity | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T21:34:04+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilroberta-base-finetuned-githubCybersecurity
================================================
This model is a fine-tuned version of distilroberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.7557
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0.dev0
* Pytorch 2.2.0.post100
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.0.post100\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.0.post100\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
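The card leaves this placeholder empty. Purely as a hedged sketch, assuming the custom tokenizer defines a mask token and the model follows the usual masked-LM input and output conventions (neither is stated in this card), a single masked position could be scored as follows:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Repository id taken from this model's metadata; everything else is an assumption.
model_id = "kuleshov-group/caduceus-ps_seqlen-1k_d_model-256_n_layer-4_lr-8e-3"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)

# Illustrative input only; the expected input domain is not documented in this card.
inputs = tokenizer("ACGTACGT", return_tensors="pt")
inputs["input_ids"][0, 4] = tokenizer.mask_token_id  # assumes the tokenizer defines a mask token

with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits[0, 4].argmax(-1))
print(tokenizer.decode([predicted_id]))
```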
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | kuleshov-group/caduceus-ps_seqlen-1k_d_model-256_n_layer-4_lr-8e-3 | null | [
"transformers",
"safetensors",
"caduceus",
"fill-mask",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | null | 2024-04-16T21:35:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #caduceus #fill-mask #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #caduceus #fill-mask #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | kuleshov-group/caduceus-ph_seqlen-1k_d_model-256_n_layer-4_lr-8e-3 | null | [
"transformers",
"safetensors",
"caduceus",
"fill-mask",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | null | 2024-04-16T21:35:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #caduceus #fill-mask #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #caduceus #fill-mask #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<br>
<br>
# LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
Base LLM: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
**Model date:**
LLaVA-v1.6-Mistral-7B was trained in December 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
[mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) license.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
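As a usage illustration: LLaVA-v1.6-Mistral-7B checkpoints are commonly run through the `llava-hf` ports with a recent `transformers`; whether this particular repository loads the same way, and the exact repo id below, are assumptions.

```python
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

# Assumed repo id of a converted checkpoint; this repository may instead require
# the original LLaVA codebase (https://github.com/haotian-liu/LLaVA).
model_id = "llava-hf/llava-v1.6-mistral-7b-hf"
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(model_id, device_map="auto")

image = Image.open("example.jpg")  # any local image
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"  # Mistral-style template

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```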
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.
## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs. | {"license": "apache-2.0", "inference": false} | jeiku/llavamistral1.6configedit | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T21:36:12+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us
|
<br>
<br>
# LLaVA Model Card
## Model details
Model type:
LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
Base LLM: mistralai/Mistral-7B-Instruct-v0.2
Model date:
LLaVA-v1.6-Mistral-7B was trained in December 2023.
Paper or resources for more information:
URL
## License
mistralai/Mistral-7B-Instruct-v0.2 license.
Where to send questions or comments about the model:
URL
## Intended use
Primary intended uses:
The primary use of LLaVA is research on large multimodal models and chatbots.
Primary intended users:
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.
## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs. | [
"# LLaVA Model Card",
"## Model details\n\nModel type:\nLLaVA is an open-source chatbot trained by fine-tuning LLM on multimodal instruction-following data.\nIt is an auto-regressive language model, based on the transformer architecture.\nBase LLM: mistralai/Mistral-7B-Instruct-v0.2\n\nModel date:\nLLaVA-v1.6-Mistral-7B was trained in December 2023.\n\nPaper or resources for more information:\nURL",
"## License\nmistralai/Mistral-7B-Instruct-v0.2 license.\n\nWhere to send questions or comments about the model:\nURL",
"## Intended use\nPrimary intended uses:\nThe primary use of LLaVA is research on large multimodal models and chatbots.\n\nPrimary intended users:\nThe primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.",
"## Training dataset\n- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.\n- 158K GPT-generated multimodal instruction-following data.\n- 500K academic-task-oriented VQA data mixture.\n- 50K GPT-4V data mixture.\n- 40K ShareGPT data.",
"## Evaluation dataset\nA collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #license-apache-2.0 #autotrain_compatible #text-generation-inference #region-us \n",
"# LLaVA Model Card",
"## Model details\n\nModel type:\nLLaVA is an open-source chatbot trained by fine-tuning LLM on multimodal instruction-following data.\nIt is an auto-regressive language model, based on the transformer architecture.\nBase LLM: mistralai/Mistral-7B-Instruct-v0.2\n\nModel date:\nLLaVA-v1.6-Mistral-7B was trained in December 2023.\n\nPaper or resources for more information:\nURL",
"## License\nmistralai/Mistral-7B-Instruct-v0.2 license.\n\nWhere to send questions or comments about the model:\nURL",
"## Intended use\nPrimary intended uses:\nThe primary use of LLaVA is research on large multimodal models and chatbots.\n\nPrimary intended users:\nThe primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.",
"## Training dataset\n- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.\n- 158K GPT-generated multimodal instruction-following data.\n- 500K academic-task-oriented VQA data mixture.\n- 50K GPT-4V data mixture.\n- 40K ShareGPT data.",
"## Evaluation dataset\nA collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_hh_usp3_dpo9
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9971
- Rewards/chosen: -22.6484
- Rewards/rejected: -28.5100
- Rewards/accuracies: 0.6200
- Rewards/margins: 5.8617
- Logps/rejected: -144.5019
- Logps/chosen: -138.1667
- Logits/rejected: -0.4573
- Logits/chosen: -0.4284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
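As a configuration sketch, the settings above translate roughly into the following `transformers`/`peft` objects. The use of LoRA and its values are assumptions (the card does not record them), and the DPO-specific pieces (reference model, preference dataset, beta) would be supplied to TRL's `DPOTrainer`, whose exact signature varies between TRL versions:

```python
from transformers import TrainingArguments
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="model_hh_usp3_dpo9",
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,   # 4 x 4 = effective batch size of 16
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,                         # Adam betas/epsilon are the listed defaults
)

# Hypothetical adapter settings - not stated in the card.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
```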
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0667 | 2.67 | 100 | 1.2931 | -0.0486 | -2.3792 | 0.6500 | 2.3307 | -115.4677 | -113.0558 | -0.0497 | -0.0386 |
| 0.0265 | 5.33 | 200 | 2.5238 | -3.3105 | -7.5646 | 0.6600 | 4.2541 | -121.2292 | -116.6801 | -0.3923 | -0.3765 |
| 0.139 | 8.0 | 300 | 4.4570 | -13.8321 | -19.1751 | 0.6100 | 5.3430 | -134.1298 | -128.3709 | -0.2657 | -0.2456 |
| 0.0061 | 10.67 | 400 | 4.9964 | -19.0684 | -25.0784 | 0.6300 | 6.0099 | -140.6890 | -134.1890 | -0.4660 | -0.4443 |
| 0.0 | 13.33 | 500 | 5.0051 | -22.7007 | -28.5148 | 0.6100 | 5.8141 | -144.5073 | -138.2248 | -0.4580 | -0.4287 |
| 0.0 | 16.0 | 600 | 4.9951 | -22.7131 | -28.5252 | 0.6000 | 5.8121 | -144.5188 | -138.2386 | -0.4569 | -0.4278 |
| 0.0 | 18.67 | 700 | 4.9801 | -22.6913 | -28.5241 | 0.6200 | 5.8329 | -144.5176 | -138.2144 | -0.4571 | -0.4278 |
| 0.0 | 21.33 | 800 | 4.9915 | -22.6547 | -28.5091 | 0.6000 | 5.8544 | -144.5009 | -138.1738 | -0.4569 | -0.4278 |
| 0.0 | 24.0 | 900 | 4.9990 | -22.6732 | -28.5298 | 0.6200 | 5.8566 | -144.5239 | -138.1943 | -0.4568 | -0.4277 |
| 0.0 | 26.67 | 1000 | 4.9971 | -22.6484 | -28.5100 | 0.6200 | 5.8617 | -144.5019 | -138.1667 | -0.4573 | -0.4284 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_usp3_dpo9", "results": []}]} | guoyu-zhang/model_hh_usp3_dpo9 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-16T21:37:04+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
| model\_hh\_usp3\_dpo9
=====================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 4.9971
* Rewards/chosen: -22.6484
* Rewards/rejected: -28.5100
* Rewards/accuracies: 0.6200
* Rewards/margins: 5.8617
* Logps/rejected: -144.5019
* Logps/chosen: -138.1667
* Logits/rejected: -0.4573
* Logits/chosen: -0.4284
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5930
- F1 Score: 0.7192
- Accuracy: 0.721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6448 | 12.5 | 200 | 0.6122 | 0.6659 | 0.666 |
| 0.5841 | 25.0 | 400 | 0.6074 | 0.6639 | 0.664 |
| 0.5533 | 37.5 | 600 | 0.5837 | 0.6779 | 0.678 |
| 0.5309 | 50.0 | 800 | 0.5860 | 0.7049 | 0.707 |
| 0.5179 | 62.5 | 1000 | 0.5824 | 0.6985 | 0.699 |
| 0.51 | 75.0 | 1200 | 0.5789 | 0.7160 | 0.716 |
| 0.5018 | 87.5 | 1400 | 0.5842 | 0.7060 | 0.706 |
| 0.4971 | 100.0 | 1600 | 0.5766 | 0.7142 | 0.715 |
| 0.4911 | 112.5 | 1800 | 0.5825 | 0.7123 | 0.713 |
| 0.4866 | 125.0 | 2000 | 0.5807 | 0.7188 | 0.719 |
| 0.4807 | 137.5 | 2200 | 0.5708 | 0.7172 | 0.718 |
| 0.4762 | 150.0 | 2400 | 0.5975 | 0.7103 | 0.711 |
| 0.4708 | 162.5 | 2600 | 0.5914 | 0.7081 | 0.708 |
| 0.4662 | 175.0 | 2800 | 0.5896 | 0.7235 | 0.724 |
| 0.4597 | 187.5 | 3000 | 0.5873 | 0.7107 | 0.711 |
| 0.4553 | 200.0 | 3200 | 0.6002 | 0.7150 | 0.715 |
| 0.4508 | 212.5 | 3400 | 0.5962 | 0.7138 | 0.714 |
| 0.4453 | 225.0 | 3600 | 0.5966 | 0.7207 | 0.721 |
| 0.4401 | 237.5 | 3800 | 0.6107 | 0.7210 | 0.721 |
| 0.4354 | 250.0 | 4000 | 0.6074 | 0.7191 | 0.719 |
| 0.431 | 262.5 | 4200 | 0.6075 | 0.7218 | 0.722 |
| 0.4272 | 275.0 | 4400 | 0.6113 | 0.7250 | 0.725 |
| 0.4216 | 287.5 | 4600 | 0.6090 | 0.7248 | 0.725 |
| 0.4175 | 300.0 | 4800 | 0.6172 | 0.7171 | 0.717 |
| 0.4134 | 312.5 | 5000 | 0.6283 | 0.7151 | 0.715 |
| 0.4104 | 325.0 | 5200 | 0.6111 | 0.7184 | 0.719 |
| 0.4056 | 337.5 | 5400 | 0.6145 | 0.7151 | 0.715 |
| 0.4023 | 350.0 | 5600 | 0.6193 | 0.7130 | 0.713 |
| 0.3984 | 362.5 | 5800 | 0.6209 | 0.7205 | 0.721 |
| 0.3937 | 375.0 | 6000 | 0.6214 | 0.7169 | 0.717 |
| 0.3905 | 387.5 | 6200 | 0.6363 | 0.7150 | 0.715 |
| 0.3863 | 400.0 | 6400 | 0.6370 | 0.7180 | 0.718 |
| 0.3825 | 412.5 | 6600 | 0.6322 | 0.7119 | 0.712 |
| 0.3822 | 425.0 | 6800 | 0.6335 | 0.714 | 0.714 |
| 0.3782 | 437.5 | 7000 | 0.6439 | 0.7140 | 0.714 |
| 0.3739 | 450.0 | 7200 | 0.6308 | 0.7149 | 0.716 |
| 0.3725 | 462.5 | 7400 | 0.6436 | 0.7080 | 0.708 |
| 0.3692 | 475.0 | 7600 | 0.6332 | 0.7090 | 0.709 |
| 0.3666 | 487.5 | 7800 | 0.6433 | 0.7090 | 0.709 |
| 0.3646 | 500.0 | 8000 | 0.6488 | 0.7061 | 0.706 |
| 0.3635 | 512.5 | 8200 | 0.6414 | 0.7040 | 0.704 |
| 0.3611 | 525.0 | 8400 | 0.6492 | 0.7020 | 0.702 |
| 0.3582 | 537.5 | 8600 | 0.6508 | 0.7121 | 0.712 |
| 0.3585 | 550.0 | 8800 | 0.6505 | 0.7120 | 0.712 |
| 0.3565 | 562.5 | 9000 | 0.6525 | 0.7121 | 0.712 |
| 0.3556 | 575.0 | 9200 | 0.6534 | 0.7101 | 0.71 |
| 0.3538 | 587.5 | 9400 | 0.6588 | 0.7091 | 0.709 |
| 0.3541 | 600.0 | 9600 | 0.6504 | 0.7120 | 0.712 |
| 0.3522 | 612.5 | 9800 | 0.6549 | 0.7140 | 0.714 |
| 0.3523 | 625.0 | 10000 | 0.6566 | 0.7121 | 0.712 |
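For reference, an adapter from a run like this is normally loaded back onto its base model with PEFT before evaluation. A sketch follows; the auto class, tokenizer choice, and `trust_remote_code` are assumptions for this custom seqsight architecture, and the task head used for GUE_tf_0 classification is not stated in the card:

```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_32768_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_0-seqsight_32768_512_30M-L32_all"

# trust_remote_code is assumed to be needed for the custom architecture.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModel.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```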
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_tf_0-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_0-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:38:40+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_tf\_0-seqsight\_32768\_512\_30M-L32\_all
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5930
* F1 Score: 0.7192
* Accuracy: 0.721
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": ["trl", "sft"]} | lilyray/falcon_7b_emo_motiv_tomi | null | [
"transformers",
"safetensors",
"trl",
"sft",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T21:38:44+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #trl #sft #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #trl #sft #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers | # aranea-tenebris-120b-v1.0-gguf
**aka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B**
Model merge for uncensored creative writing and rp

A [mergekit](https://github.com/arcee-ai/mergekit) frankenmerge based on [Netrve/Miqu-PlayMaid-70B-v0.1](https://huggingface.co/Netrve/Miqu-PlayMaid-70B-v0.1) with interleaved layers of [ShinojiResearch/Senku-70B](https://huggingface.co/ShinojiResearch/Senku-70B).
This was the top-performing model from a second series of merge experiments to create a highly coherent creative writing and rp model.
Tests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges.
A number of different base models, interleave models and layer offsets were compared.
This model outperformed a number of other popular 70B+ models and merges in both creativity and coherency tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts.
- Usable context: ~32768
- Recommended prompt format: Alpaca
- Layers: 137
### Quantization
llama.cpp [imatrix.dat](./imatrix.dat)
Will upload a few quants when bandwidth permits.
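A minimal inference sketch via `llama-cpp-python`, using the usable context and Alpaca prompt format noted above; the quant file name and sampling settings are assumptions:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="aranea-tenebris-120b-v1.0.Q4_K_M.gguf",  # hypothetical quant file name
    n_ctx=32768,      # usable context noted above
    n_gpu_layers=-1,  # offload all layers if VRAM allows
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nNarrate the party's arrival at the ruined keep.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=400, temperature=0.8)
print(out["choices"][0]["text"])
```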
### Testing
Two different writing styles were considered for each testing scenario:
- Completions for 3rd person narration. No character role was assumed.
- Completions for 1st and 2nd person turn based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.
Tests assumed a mature audience, but a range of scenarios were constructed.
Thematic inconsistency or bias in character behaviour was penalized heavily.
Models showing the following were penalized during manual comparison:
- Consistently short responses.
- Laziness, or readily giving up on solving a character problem.
- Overly malleable, where characters could not hold opinions or beliefs.
- Passiveness or an inability to drive the narrative.
- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.
- Ignoring or missing obvious scenario solutions.
- Impersonating other major characters out of turn during rp tests.
- Failure to follow a character's description. This criterion is fairly broad, and could include things like character skills, refusals, etc.
- Major inconsistencies in scenes or recall. Note - invention of thematically consistent detail was encouraged.
### Interesting observations from benchmarking
- 10 layer interleave stride with a 20 layer interleave width consistently outperformed alternative combinations for coherency (see the sketch after this list).
- 8 layer interleave stride with a 16 layer interleave width consistently outperformed alternative combinations for creativity whilst remaining reasonably coherent.
- Regular stride intervals are not optimal. In particular, offsetting the first or last set of base model offsets often improved metrics.
- Goliath-120B is still a good standard for coherency below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount of coherency could be sacrificed for notable creativity improvements.
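To make the stride/width terminology above concrete, here is a toy illustration (hypothetical, not the plan used for this merge) of how alternating windows of two 80-layer models can be laid out from a stride/width pair; with perfectly regular strides this yields 140 layers, whereas the 137 layers of this merge reflect a less regular layout, as noted above:

```python
# Illustration only - hypothetical interleave plan, not the merge config used here.
def interleave(n_layers=80, stride=10, width=20, models=("base", "donor")):
    plan, start = [], 0
    while start + width <= n_layers:
        plan.append((models[len(plan) % 2], start, start + width))
        start += stride
    return plan

plan = interleave()
for name, lo, hi in plan:
    print(f"{name}: layers {lo}-{hi}")
print("total layers:", sum(hi - lo for _, lo, hi in plan))  # 140 with regular strides
```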
| {"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "nsfw", "mergekit", "merge"], "base_model": ["Netrve/Miqu-PlayMaid-70B-v0.1", "ShinojiResearch/Senku-70B"]} | divinetaco/aranea-tenebris-120b-v1.0-gguf | null | [
"transformers",
"gguf",
"not-for-all-audiences",
"nsfw",
"mergekit",
"merge",
"base_model:Netrve/Miqu-PlayMaid-70B-v0.1",
"base_model:ShinojiResearch/Senku-70B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T21:39:00+00:00 | [] | [] | TAGS
#transformers #gguf #not-for-all-audiences #nsfw #mergekit #merge #base_model-Netrve/Miqu-PlayMaid-70B-v0.1 #base_model-ShinojiResearch/Senku-70B #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| # aranea-tenebris-120b-v1.0-gguf
aka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B
Model merge for uncensored creative writing and rp
!image/png
A mergekit frankenmerge based on Netrve/Miqu-PlayMaid-70B-v0.1 with interleaved layers of ShinojiResearch/Senku-70B.
This was the top-performing model from a second series of merge experiments to create a highly coherent creative writing and rp model.
Tests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges.
A number of different base models, interleave models and layer offsets were compared.
This model outperformed a number of other popular 70B+ models and merges in both creativity and coherency tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts.
- Usable context: ~32768
- Recommended prompt format: Alpaca
- Layers: 137
### Quantization
URL URL
Will upload a few quants when bandwidth permits.
### Testing
Two different writing styles were considered for each testing scenario:
- Completions for 3rd person narration. No character role was assumed.
- Completions for 1st and 2nd person turn based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.
Tests assumed a mature audience, but a range of scenarios were constructed.
Thematic inconsistency or bias in character behaviour was penalized heavily.
Models showing the following were penalized during manual comparison:
- Consistently short responses.
- Laziness, or readily giving up on solving a character problem.
- Overly malleable, where characters could not hold opinions or beliefs.
- Passiveness or an inability to drive the narrative.
- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.
- Ignoring or missing obvious scenario solutions.
- Impersonating other major characters out of turn during rp tests.
- Failure to follow a character's description. This criterion is fairly broad, and could include things like character skills, refusals, etc.
- Major inconsistencies in scenes or recall. Note - invention of thematically consistent detail was encouraged.
### Interesting observations from benchmarking
- 10 layer interleave stride with a 20 layer interleave width consistently outperformed alternative combinations for coherency.
- 8 layer interleave stride with a 16 layer interleave width consistently outperformed alternative combinations for creativity whilst remaining reasonably coherent.
- Regular stride intervals are not optimal. In particular, offsetting the first or last set of base model offsets often improved metrics.
- Goliath-120B is still a good standard for coherency below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount of coherency could be sacrificed for notable creativity improvements.
| [
"# aranea-tenebris-120b-v1.0-gguf\naka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B \nModel merge for uncensored creative writing and rp \n\n!image/png\n\nA mergekit frankenmerge based on Netrve/Miqu-PlayMaid-70B-v0.1 with interleaved layers of ShinojiResearch/Senku-70B. \nThis was the top performing model from a second series of merge experiments to create a highly coherant creative writing and rp model. \nTests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges. \n \nA number of different base models, interleave models and layer offsets were compared. \nThis model outperformed a number of other popular 70B+ models and merges in both creativity and coherancy tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts. \n\n- Usable context: ~32768\n- Recommended prompt format: Alpaca\n- Layers: 137",
"### Quantization\n \nURL URL \n \nWill upload a few quants when bandwidth permits.",
"### Testing\n\nTwo different writing styles were considered for each testing scenario:\n- Completions for 3rd person narration. No character role was assumed.\n- Completions for 1st and 2nd person turn based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.\n\nTests assumed a mature audience, but a range of scenarios were constructed. \nThematic inconsistancy or bias in character behaviour was penalized heavily. \n\nModels showing the following were penalized during manual comparison:\n- Consistently short responses.\n- Laziness or readily gave up on solving a character problem.\n- Overly malleable, where characters could not hold opinions or beliefs.\n- Passiveness or an inability to drive the narrative.\n- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.\n- Ignoring or missing obvious scenario solutions.\n- Impersonating other major characters out of turn during rp tests.\n- Faliure to follow a character's description. This criteria is pretty broad, and could include things like character skills, refusals etc.\n- Major inconsistencies in scenes or recall. Note - invention of thematically consistant detail was encouraged.",
"### Interesting observations from benchmarking\n\n- 10 layer interleave stride with a 20 layer interleave width consistently outperformed alternative combinations for coherancy.\n- 8 layer interleave stride with a 16 layer interleave width consistantly outperformed alternative combinations for creativity whilst remaining reasonably coherant.\n- Regular stride intervals are not optimal. In particular offsetting the first or last set of base models offets often improved metrics.\n- Goliath-120B is still a good standard for coherancy below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount coherancy could be sacrificed for notable creativity improvements."
] | [
"TAGS\n#transformers #gguf #not-for-all-audiences #nsfw #mergekit #merge #base_model-Netrve/Miqu-PlayMaid-70B-v0.1 #base_model-ShinojiResearch/Senku-70B #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# aranea-tenebris-120b-v1.0-gguf\naka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B \nModel merge for uncensored creative writing and rp \n\n!image/png\n\nA mergekit frankenmerge based on Netrve/Miqu-PlayMaid-70B-v0.1 with interleaved layers of ShinojiResearch/Senku-70B. \nThis was the top performing model from a second series of merge experiments to create a highly coherant creative writing and rp model. \nTests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges. \n \nA number of different base models, interleave models and layer offsets were compared. \nThis model outperformed a number of other popular 70B+ models and merges in both creativity and coherancy tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts. \n\n- Usable context: ~32768\n- Recommended prompt format: Alpaca\n- Layers: 137",
"### Quantization\n \nURL URL \n \nWill upload a few quants when bandwidth permits.",
"### Testing\n\nTwo different writing styles were considered for each testing scenario:\n- Completions for 3rd person narration. No character role was assumed.\n- Completions for 1st and 2nd person turn based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.\n\nTests assumed a mature audience, but a range of scenarios were constructed. \nThematic inconsistancy or bias in character behaviour was penalized heavily. \n\nModels showing the following were penalized during manual comparison:\n- Consistently short responses.\n- Laziness or readily gave up on solving a character problem.\n- Overly malleable, where characters could not hold opinions or beliefs.\n- Passiveness or an inability to drive the narrative.\n- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.\n- Ignoring or missing obvious scenario solutions.\n- Impersonating other major characters out of turn during rp tests.\n- Faliure to follow a character's description. This criteria is pretty broad, and could include things like character skills, refusals etc.\n- Major inconsistencies in scenes or recall. Note - invention of thematically consistant detail was encouraged.",
"### Interesting observations from benchmarking\n\n- 10 layer interleave stride with a 20 layer interleave width consistently outperformed alternative combinations for coherancy.\n- 8 layer interleave stride with a 16 layer interleave width consistantly outperformed alternative combinations for creativity whilst remaining reasonably coherant.\n- Regular stride intervals are not optimal. In particular offsetting the first or last set of base models offets often improved metrics.\n- Goliath-120B is still a good standard for coherancy below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount coherancy could be sacrificed for notable creativity improvements."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-addition
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 5000
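As a sketch of how these settings typically come together for a PEFT fine-tune of phi-1_5; the use of LoRA, its values, and the target module names are assumptions, and only the arguments listed above come from the card:

```python
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")

# Hypothetical adapter settings - the card does not record the LoRA configuration.
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

training_args = TrainingArguments(
    output_dir="phi-1_5-finetuned-addition",
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    lr_scheduler_type="cosine",
    max_steps=5000,
    seed=42,
)
```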
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-1_5", "model-index": [{"name": "phi-1_5-finetuned-addition", "results": []}]} | Antonilyin/phi-1_5-finetuned-addition | null | [
"peft",
"tensorboard",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2024-04-16T21:39:05+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #phi #generated_from_trainer #custom_code #base_model-microsoft/phi-1_5 #license-mit #region-us
|
# phi-1_5-finetuned-addition
This model is a fine-tuned version of microsoft/phi-1_5 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# phi-1_5-finetuned-addition\n\nThis model is a fine-tuned version of microsoft/phi-1_5 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #phi #generated_from_trainer #custom_code #base_model-microsoft/phi-1_5 #license-mit #region-us \n",
"# phi-1_5-finetuned-addition\n\nThis model is a fine-tuned version of microsoft/phi-1_5 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers | # aranea-tenebris-120b-v1.0-exl2
**aka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B**
Model merge for uncensored creative writing and rp

A [mergekit](https://github.com/arcee-ai/mergekit) frankenmerge based on [Netrve/Miqu-PlayMaid-70B-v0.1](https://huggingface.co/Netrve/Miqu-PlayMaid-70B-v0.1) with interleaved layers of [ShinojiResearch/Senku-70B](https://huggingface.co/ShinojiResearch/Senku-70B).
This was the top-performing model from a second series of merge experiments to create a highly coherent creative writing and rp model.
Tests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges.
A number of different base models, interleave models and layer offsets were compared.
This model outperformed a number of other popular 70B+ models and merges in both creativity and coherency tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts.
- Usable context: ~32768
- Recommended prompt format: Alpaca (rendered in the sketch after this list)
- Layers: 137
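A standard rendering of the Alpaca format recommended above; the preamble wording is the common Stanford-Alpaca one and is an assumption for this merge:

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)
print(ALPACA_TEMPLATE.format(instruction="Describe the tavern the party has just entered."))
```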
### Quantization
llama.cpp [imatrix.dat](./imatrix.dat)
Will upload a few quants when bandwidth permits.
### Testing
Two different writing styles were considered for each testing scenario:
- Completions for 3rd person narration. No character role was assumed.
- Completions for 1st and 2nd person turn based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.
Tests assumed a mature audience, but a range of scenarios were constructed.
Thematic inconsistency or bias in character behaviour was penalized heavily.
Models showing the following were penalized during manual comparison:
- Consistently short responses.
- Laziness, or readily giving up on solving a character problem.
- Overly malleable, where characters could not hold opinions or beliefs.
- Passiveness or an inability to drive the narrative.
- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.
- Ignoring or missing obvious scenario solutions.
- Impersonating other major characters out of turn during rp tests.
- Failure to follow a character's description. This criterion is fairly broad, and could include things like character skills, refusals, etc.
- Major inconsistencies in scenes or recall. Note - invention of thematically consistent detail was encouraged.
### Interesting observations from benchmarking
- 10 layer interleave stride with a 20 layer interleave width consistently outperformed alternative combinations for coherency.
- 8 layer interleave stride with a 16 layer interleave width consistently outperformed alternative combinations for creativity whilst remaining reasonably coherent.
- Regular stride intervals are not optimal. In particular, offsetting the first or last set of base model offsets often improved metrics.
- Goliath-120B is still a good standard for coherency below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount of coherency could be sacrificed for notable creativity improvements.
| {"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "nsfw", "mergekit", "merge"], "base_model": ["Netrve/Miqu-PlayMaid-70B-v0.1", "ShinojiResearch/Senku-70B"]} | divinetaco/aranea-tenebris-120b-v1.0-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"mergekit",
"merge",
"conversational",
"base_model:Netrve/Miqu-PlayMaid-70B-v0.1",
"base_model:ShinojiResearch/Senku-70B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T21:39:27+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #mergekit #merge #conversational #base_model-Netrve/Miqu-PlayMaid-70B-v0.1 #base_model-ShinojiResearch/Senku-70B #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # aranea-tenebris-120b-v1.0-exl2
aka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B
Model merge for uncensored creative writing and rp
!image/png
A mergekit frankenmerge based on Netrve/Miqu-PlayMaid-70B-v0.1 with interleaved layers of ShinojiResearch/Senku-70B.
This was the top-performing model from a second series of merge experiments to create a highly coherent creative writing and rp model.
Tests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges.
A number of different base models, interleave models and layer offsets were compared.
This model outperformed a number of other popular 70B+ models and merges in both creativity and coherency tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts.
- Usable context: ~32768
- Recommended prompt format: Alpaca
- Layers: 137
### Quantization
URL URL
Will upload a few quants when bandwidth permits.
### Testing
Two different writing styles were considered for each testing scenario:
- Completions for 3rd person narration. No character role was assumed.
- Completions for 1st and 2nd person turn based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.
Tests assumed a mature audience, but a range of scenarios were constructed.
Thematic inconsistency or bias in character behaviour was penalized heavily.
Models showing the following were penalized during manual comparison:
- Consistently short responses.
- Laziness, or readily giving up on solving a character problem.
- Overly malleable, where characters could not hold opinions or beliefs.
- Passiveness or an inability to drive the narrative.
- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.
- Ignoring or missing obvious scenario solutions.
- Impersonating other major characters out of turn during rp tests.
- Failure to follow a character's description. This criterion is fairly broad, and could include things like character skills, refusals, etc.
- Major inconsistencies in scenes or recall. Note - invention of thematically consistent detail was encouraged.
### Interesting observations from benchmarking
- 10 layer interleave stride with a 20 layer interleave width consistently outperformed alternative combinations for coherency.
- 8 layer interleave stride with a 16 layer interleave width consistently outperformed alternative combinations for creativity whilst remaining reasonably coherent.
- Regular stride intervals are not optimal. In particular, offsetting the first or last set of base model offsets often improved metrics.
- Goliath-120B is still a good standard for coherency below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount of coherency could be sacrificed for notable creativity improvements.
| [
"# aranea-tenebris-120b-v1.0-exl2\naka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B \nModel merge for uncensored creative writing and rp \n\n!image/png\n\nA mergekit frankenmerge based on Netrve/Miqu-PlayMaid-70B-v0.1 with interleaved layers of ShinojiResearch/Senku-70B. \nThis was the top performing model from a second series of merge experiments to create a highly coherant creative writing and rp model. \nTests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges. \n \nA number of different base models, interleave models and layer offsets were compared. \nThis model outperformed a number of other popular 70B+ models and merges in both creativity and coherancy tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts. \n\n- Usable context: ~32768\n- Recommended prompt format: Alpaca\n- Layers: 137",
"### Quantization\n \nURL URL \n \nWill upload a few quants when bandwidth permits.",
"### Testing\n\nTwo different writing styles were considered for each testing scenario:\n- Completions for 3rd person narration. No character role was assumed.\n- Completions for 1st and 2nd person turn based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.\n\nTests assumed a mature audience, but a range of scenarios were constructed. \nThematic inconsistancy or bias in character behaviour was penalized heavily. \n\nModels showing the following were penalized during manual comparison:\n- Consistently short responses.\n- Laziness or readily gave up on solving a character problem.\n- Overly malleable, where characters could not hold opinions or beliefs.\n- Passiveness or an inability to drive the narrative.\n- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.\n- Ignoring or missing obvious scenario solutions.\n- Impersonating other major characters out of turn during rp tests.\n- Faliure to follow a character's description. This criteria is pretty broad, and could include things like character skills, refusals etc.\n- Major inconsistencies in scenes or recall. Note - invention of thematically consistant detail was encouraged.",
"### Interesting observations from benchmarking\n\n- 10 layer interleave stride with a 20 layer interleave width consistently outperformed alternative combinations for coherancy.\n- 8 layer interleave stride with a 16 layer interleave width consistantly outperformed alternative combinations for creativity whilst remaining reasonably coherant.\n- Regular stride intervals are not optimal. In particular offsetting the first or last set of base models offets often improved metrics.\n- Goliath-120B is still a good standard for coherancy below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount coherancy could be sacrificed for notable creativity improvements."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #nsfw #mergekit #merge #conversational #base_model-Netrve/Miqu-PlayMaid-70B-v0.1 #base_model-ShinojiResearch/Senku-70B #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# aranea-tenebris-120b-v1.0-exl2\naka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B \nModel merge for uncensored creative writing and rp \n\n!image/png\n\nA mergekit frankenmerge based on Netrve/Miqu-PlayMaid-70B-v0.1 with interleaved layers of ShinojiResearch/Senku-70B. \nThis was the top performing model from a second series of merge experiments to create a highly coherant creative writing and rp model. \nTests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges. \n \nA number of different base models, interleave models and layer offsets were compared. \nThis model outperformed a number of other popular 70B+ models and merges in both creativity and coherancy tests. It was (briefly) compared to Mixtral 8x22B running 2/3/4 experts. \n\n- Usable context: ~32768\n- Recommended prompt format: Alpaca\n- Layers: 137",
"### Quantization\n \nURL URL \n \nWill upload a few quants when bandwidth permits.",
"### Testing\n\nTwo different writing styles were considered for each testing scenario:\n- Completions for 3rd person narration. No character role was assumed.\n- Completions for 1st and 2nd person turn based (out-of-order) rp. A character role was assumed by the model, but narration of minor characters and events was encouraged.\n\nTests assumed a mature audience, but a range of scenarios were constructed. \nThematic inconsistancy or bias in character behaviour was penalized heavily. \n\nModels showing the following were penalized during manual comparison:\n- Consistently short responses.\n- Laziness or readily gave up on solving a character problem.\n- Overly malleable, where characters could not hold opinions or beliefs.\n- Passiveness or an inability to drive the narrative.\n- Persistent repeats. Bad merges tend to latch onto and reuse specific keywords.\n- Ignoring or missing obvious scenario solutions.\n- Impersonating other major characters out of turn during rp tests.\n- Faliure to follow a character's description. This criteria is pretty broad, and could include things like character skills, refusals etc.\n- Major inconsistencies in scenes or recall. Note - invention of thematically consistant detail was encouraged.",
"### Interesting observations from benchmarking\n\n- 10 layer interleave stride with a 20 layer interleave width consistently outperformed alternative combinations for coherancy.\n- 8 layer interleave stride with a 16 layer interleave width consistantly outperformed alternative combinations for creativity whilst remaining reasonably coherant.\n- Regular stride intervals are not optimal. In particular offsetting the first or last set of base models offets often improved metrics.\n- Goliath-120B is still a good standard for coherancy below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount coherancy could be sacrificed for notable creativity improvements."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | piercemaloney/llemma-7b-v3-finetuned-partial | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T21:42:10+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Kenito21/Modelo-Kenito | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T21:42:59+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
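Until this section is filled in by the author, a generic loading sketch for a DistilBERT text-classification checkpoint like this one might look as follows; the label set and intended inputs are undocumented, so treat the output purely as an illustration.

```python
# Generic sketch (not author-provided): load the classifier with the pipeline API.
# The label names and intended use of this model are not documented on this card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="savinda99/distilbert-base-uncased-finetuned-lgbt-classification",
)
print(classifier("Example sentence to classify."))
```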
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | savinda99/distilbert-base-uncased-finetuned-lgbt-classification | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T21:43:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #distilbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #distilbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5199
- F1 Score: 0.7450
- Accuracy: 0.746
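Since this checkpoint is a PEFT adapter rather than a full model, it is normally loaded on top of the base checkpoint named above. The sketch below shows one plausible way to do that; the choice of a sequence-classification head and `num_labels=2` is an assumption inferred from the F1/accuracy metrics, not something documented on this card.

```python
# Hedged sketch: load the PEFT adapter on top of its base model.
# The Auto class and num_labels are assumptions; check the adapter's
# adapter_config.json for the task type actually used during training.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_32768_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_1-seqsight_32768_512_30M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```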
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6491 | 13.33 | 200 | 0.6254 | 0.6353 | 0.636 |
| 0.5891 | 26.67 | 400 | 0.6321 | 0.6408 | 0.641 |
| 0.5595 | 40.0 | 600 | 0.6264 | 0.6514 | 0.652 |
| 0.535 | 53.33 | 800 | 0.6305 | 0.6683 | 0.671 |
| 0.5196 | 66.67 | 1000 | 0.6419 | 0.6659 | 0.666 |
| 0.5082 | 80.0 | 1200 | 0.6426 | 0.6620 | 0.662 |
| 0.4995 | 93.33 | 1400 | 0.6327 | 0.6727 | 0.673 |
| 0.4932 | 106.67 | 1600 | 0.6513 | 0.6678 | 0.668 |
| 0.4887 | 120.0 | 1800 | 0.6348 | 0.67 | 0.67 |
| 0.4823 | 133.33 | 2000 | 0.6279 | 0.6700 | 0.67 |
| 0.4752 | 146.67 | 2200 | 0.6490 | 0.6678 | 0.668 |
| 0.4703 | 160.0 | 2400 | 0.6813 | 0.6680 | 0.668 |
| 0.4619 | 173.33 | 2600 | 0.6676 | 0.6668 | 0.667 |
| 0.4549 | 186.67 | 2800 | 0.6485 | 0.6609 | 0.661 |
| 0.4496 | 200.0 | 3000 | 0.6474 | 0.6635 | 0.664 |
| 0.4427 | 213.33 | 3200 | 0.6514 | 0.6641 | 0.665 |
| 0.4364 | 226.67 | 3400 | 0.6791 | 0.6688 | 0.669 |
| 0.431 | 240.0 | 3600 | 0.6953 | 0.6668 | 0.667 |
| 0.4252 | 253.33 | 3800 | 0.6744 | 0.6686 | 0.67 |
| 0.4178 | 266.67 | 4000 | 0.6978 | 0.6674 | 0.668 |
| 0.4126 | 280.0 | 4200 | 0.6945 | 0.66 | 0.66 |
| 0.4061 | 293.33 | 4400 | 0.6935 | 0.6665 | 0.667 |
| 0.4008 | 306.67 | 4600 | 0.6900 | 0.6628 | 0.663 |
| 0.3957 | 320.0 | 4800 | 0.7086 | 0.6625 | 0.663 |
| 0.3919 | 333.33 | 5000 | 0.7301 | 0.6649 | 0.665 |
| 0.3857 | 346.67 | 5200 | 0.7005 | 0.6621 | 0.663 |
| 0.3815 | 360.0 | 5400 | 0.7084 | 0.6630 | 0.663 |
| 0.3766 | 373.33 | 5600 | 0.7422 | 0.6717 | 0.672 |
| 0.3726 | 386.67 | 5800 | 0.7488 | 0.6554 | 0.656 |
| 0.3689 | 400.0 | 6000 | 0.7312 | 0.6514 | 0.652 |
| 0.3658 | 413.33 | 6200 | 0.7335 | 0.6637 | 0.665 |
| 0.3609 | 426.67 | 6400 | 0.7294 | 0.6620 | 0.663 |
| 0.357 | 440.0 | 6600 | 0.7460 | 0.6596 | 0.66 |
| 0.3544 | 453.33 | 6800 | 0.7399 | 0.6575 | 0.658 |
| 0.3497 | 466.67 | 7000 | 0.7415 | 0.6647 | 0.665 |
| 0.3476 | 480.0 | 7200 | 0.7581 | 0.6549 | 0.655 |
| 0.3447 | 493.33 | 7400 | 0.7662 | 0.6588 | 0.659 |
| 0.3425 | 506.67 | 7600 | 0.7546 | 0.6605 | 0.661 |
| 0.3408 | 520.0 | 7800 | 0.7346 | 0.6629 | 0.663 |
| 0.3375 | 533.33 | 8000 | 0.7678 | 0.6678 | 0.668 |
| 0.3358 | 546.67 | 8200 | 0.7555 | 0.6557 | 0.656 |
| 0.3335 | 560.0 | 8400 | 0.7655 | 0.6547 | 0.655 |
| 0.3309 | 573.33 | 8600 | 0.7586 | 0.6628 | 0.663 |
| 0.3287 | 586.67 | 8800 | 0.7724 | 0.6538 | 0.654 |
| 0.3289 | 600.0 | 9000 | 0.7581 | 0.6666 | 0.667 |
| 0.3262 | 613.33 | 9200 | 0.7693 | 0.6579 | 0.658 |
| 0.3251 | 626.67 | 9400 | 0.7732 | 0.6636 | 0.664 |
| 0.3259 | 640.0 | 9600 | 0.7647 | 0.6579 | 0.658 |
| 0.3239 | 653.33 | 9800 | 0.7720 | 0.6619 | 0.662 |
| 0.3236 | 666.67 | 10000 | 0.7713 | 0.6638 | 0.664 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_tf_1-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_1-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:44:28+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_tf\_1-seqsight\_32768\_512\_30M-L32\_all
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5199
* F1 Score: 0.7450
* Accuracy: 0.746
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9241
- F1 Score: 0.6684
- Accuracy: 0.669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of an equivalent setup follows the list):
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
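As a rough translation of the list above into code, the sketch below mirrors those values in a Hugging Face `TrainingArguments` object. The output directory and the evaluation cadence (every 200 steps, inferred from the results table) are assumptions; dataset preparation and the PEFT configuration are omitted.

```python
# Sketch only: TrainingArguments mirroring the hyperparameters listed above.
# Evaluation every 200 steps is inferred from the results table; anything not
# listed (weight decay, warmup, etc.) is left at library defaults as an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_tf_4-seqsight_32768_512_30M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=1536,
    per_device_eval_batch_size=1536,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,
    logging_steps=200,
)
```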
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6323 | 15.38 | 200 | 0.6339 | 0.6341 | 0.64 |
| 0.5444 | 30.77 | 400 | 0.5885 | 0.6880 | 0.688 |
| 0.4859 | 46.15 | 600 | 0.5586 | 0.7265 | 0.728 |
| 0.4491 | 61.54 | 800 | 0.5537 | 0.7348 | 0.735 |
| 0.4238 | 76.92 | 1000 | 0.5430 | 0.7427 | 0.744 |
| 0.4092 | 92.31 | 1200 | 0.5407 | 0.7600 | 0.761 |
| 0.3975 | 107.69 | 1400 | 0.5337 | 0.7600 | 0.76 |
| 0.388 | 123.08 | 1600 | 0.5496 | 0.7584 | 0.76 |
| 0.378 | 138.46 | 1800 | 0.5454 | 0.7646 | 0.767 |
| 0.3711 | 153.85 | 2000 | 0.5334 | 0.7752 | 0.776 |
| 0.3611 | 169.23 | 2200 | 0.5476 | 0.7691 | 0.77 |
| 0.352 | 184.62 | 2400 | 0.5454 | 0.7772 | 0.778 |
| 0.3436 | 200.0 | 2600 | 0.5534 | 0.7775 | 0.779 |
| 0.3352 | 215.38 | 2800 | 0.5377 | 0.7813 | 0.782 |
| 0.3253 | 230.77 | 3000 | 0.5661 | 0.7733 | 0.776 |
| 0.3165 | 246.15 | 3200 | 0.5606 | 0.7745 | 0.775 |
| 0.3051 | 261.54 | 3400 | 0.5619 | 0.7893 | 0.79 |
| 0.2955 | 276.92 | 3600 | 0.5619 | 0.7892 | 0.79 |
| 0.2879 | 292.31 | 3800 | 0.5846 | 0.7820 | 0.784 |
| 0.2802 | 307.69 | 4000 | 0.5829 | 0.7880 | 0.789 |
| 0.2714 | 323.08 | 4200 | 0.5928 | 0.7890 | 0.791 |
| 0.2609 | 338.46 | 4400 | 0.5953 | 0.7968 | 0.798 |
| 0.255 | 353.85 | 4600 | 0.6108 | 0.7835 | 0.786 |
| 0.2471 | 369.23 | 4800 | 0.6199 | 0.7874 | 0.79 |
| 0.2399 | 384.62 | 5000 | 0.6116 | 0.7888 | 0.791 |
| 0.2336 | 400.0 | 5200 | 0.6136 | 0.7983 | 0.8 |
| 0.2291 | 415.38 | 5400 | 0.6145 | 0.7882 | 0.791 |
| 0.2229 | 430.77 | 5600 | 0.6300 | 0.7985 | 0.8 |
| 0.2173 | 446.15 | 5800 | 0.6238 | 0.8012 | 0.803 |
| 0.2114 | 461.54 | 6000 | 0.6390 | 0.8038 | 0.805 |
| 0.2068 | 476.92 | 6200 | 0.6586 | 0.7875 | 0.79 |
| 0.2033 | 492.31 | 6400 | 0.6370 | 0.8004 | 0.802 |
| 0.1976 | 507.69 | 6600 | 0.6731 | 0.7950 | 0.797 |
| 0.1929 | 523.08 | 6800 | 0.6679 | 0.7963 | 0.798 |
| 0.1912 | 538.46 | 7000 | 0.6668 | 0.7920 | 0.794 |
| 0.188 | 553.85 | 7200 | 0.6578 | 0.7922 | 0.794 |
| 0.185 | 569.23 | 7400 | 0.6490 | 0.7962 | 0.798 |
| 0.1831 | 584.62 | 7600 | 0.6412 | 0.7978 | 0.799 |
| 0.1813 | 600.0 | 7800 | 0.6514 | 0.7940 | 0.796 |
| 0.1759 | 615.38 | 8000 | 0.6567 | 0.7898 | 0.792 |
| 0.1742 | 630.77 | 8200 | 0.6646 | 0.7921 | 0.794 |
| 0.1726 | 646.15 | 8400 | 0.6769 | 0.7848 | 0.787 |
| 0.1723 | 661.54 | 8600 | 0.6724 | 0.7892 | 0.791 |
| 0.1682 | 676.92 | 8800 | 0.6574 | 0.7995 | 0.801 |
| 0.1684 | 692.31 | 9000 | 0.6701 | 0.7899 | 0.792 |
| 0.1656 | 707.69 | 9200 | 0.6815 | 0.7909 | 0.793 |
| 0.1658 | 723.08 | 9400 | 0.6809 | 0.7941 | 0.796 |
| 0.1645 | 738.46 | 9600 | 0.6888 | 0.7910 | 0.793 |
| 0.1649 | 753.85 | 9800 | 0.6757 | 0.7921 | 0.794 |
| 0.1632 | 769.23 | 10000 | 0.6786 | 0.7911 | 0.793 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_tf_4-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_4-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:44:45+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_tf\_4-seqsight\_32768\_512\_30M-L32\_all
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9241
* F1 Score: 0.6684
* Accuracy: 0.669
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model :** Local
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
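A minimal inference sketch, assuming you want to load this checkpoint the same way it was trained (via Unsloth); the sequence length and 4-bit flag are assumptions, and plain `transformers` loading should also work.

```python
# Hedged sketch: load the checkpoint with Unsloth for inference.
# max_seq_length and load_in_4bit are assumptions, not values taken from this card.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/Mixtral_AI_Cyber_Child",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast inference path

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```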
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "Local"} | LeroyDyer/Mixtral_AI_Cyber_Child | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:Local",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T21:49:34+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-Local #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model : Local
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : Local\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-Local #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : Local\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6909
- F1 Score: 0.5888
- Accuracy: 0.59
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6772 | 14.29 | 200 | 0.6523 | 0.5672 | 0.574 |
| 0.6412 | 28.57 | 400 | 0.6511 | 0.5917 | 0.605 |
| 0.6193 | 42.86 | 600 | 0.6510 | 0.6294 | 0.631 |
| 0.5965 | 57.14 | 800 | 0.6661 | 0.6371 | 0.637 |
| 0.5758 | 71.43 | 1000 | 0.6597 | 0.6494 | 0.65 |
| 0.5632 | 85.71 | 1200 | 0.6848 | 0.6275 | 0.628 |
| 0.5528 | 100.0 | 1400 | 0.6565 | 0.6559 | 0.656 |
| 0.5448 | 114.29 | 1600 | 0.6771 | 0.6505 | 0.651 |
| 0.5378 | 128.57 | 1800 | 0.6707 | 0.6494 | 0.65 |
| 0.5308 | 142.86 | 2000 | 0.6829 | 0.6316 | 0.632 |
| 0.524 | 157.14 | 2200 | 0.6688 | 0.6429 | 0.643 |
| 0.5151 | 171.43 | 2400 | 0.6791 | 0.6290 | 0.629 |
| 0.507 | 185.71 | 2600 | 0.6934 | 0.6357 | 0.636 |
| 0.4983 | 200.0 | 2800 | 0.7020 | 0.6221 | 0.623 |
| 0.4916 | 214.29 | 3000 | 0.7114 | 0.6188 | 0.619 |
| 0.4827 | 228.57 | 3200 | 0.7239 | 0.6201 | 0.62 |
| 0.4765 | 242.86 | 3400 | 0.7164 | 0.6063 | 0.608 |
| 0.4679 | 257.14 | 3600 | 0.7423 | 0.6201 | 0.62 |
| 0.4596 | 271.43 | 3800 | 0.7504 | 0.6181 | 0.618 |
| 0.4545 | 285.71 | 4000 | 0.7476 | 0.6189 | 0.619 |
| 0.4473 | 300.0 | 4200 | 0.7687 | 0.6110 | 0.611 |
| 0.4397 | 314.29 | 4400 | 0.7595 | 0.6081 | 0.609 |
| 0.4343 | 328.57 | 4600 | 0.7779 | 0.6142 | 0.616 |
| 0.4289 | 342.86 | 4800 | 0.7835 | 0.6256 | 0.626 |
| 0.4257 | 357.14 | 5000 | 0.7790 | 0.6180 | 0.618 |
| 0.4181 | 371.43 | 5200 | 0.8001 | 0.6131 | 0.613 |
| 0.4117 | 385.71 | 5400 | 0.8078 | 0.6097 | 0.61 |
| 0.4075 | 400.0 | 5600 | 0.8022 | 0.6100 | 0.61 |
| 0.4004 | 414.29 | 5800 | 0.8149 | 0.6220 | 0.622 |
| 0.3971 | 428.57 | 6000 | 0.8281 | 0.6118 | 0.612 |
| 0.3945 | 442.86 | 6200 | 0.8442 | 0.6106 | 0.611 |
| 0.3884 | 457.14 | 6400 | 0.8294 | 0.6238 | 0.624 |
| 0.382 | 471.43 | 6600 | 0.8366 | 0.6152 | 0.616 |
| 0.3795 | 485.71 | 6800 | 0.8344 | 0.6190 | 0.619 |
| 0.3751 | 500.0 | 7000 | 0.8582 | 0.6101 | 0.61 |
| 0.3723 | 514.29 | 7200 | 0.8425 | 0.6211 | 0.621 |
| 0.3695 | 528.57 | 7400 | 0.8633 | 0.6161 | 0.616 |
| 0.3651 | 542.86 | 7600 | 0.8598 | 0.6121 | 0.612 |
| 0.3618 | 557.14 | 7800 | 0.8578 | 0.6188 | 0.619 |
| 0.3599 | 571.43 | 8000 | 0.8555 | 0.6131 | 0.613 |
| 0.3567 | 585.71 | 8200 | 0.8682 | 0.6101 | 0.61 |
| 0.3543 | 600.0 | 8400 | 0.8616 | 0.6115 | 0.612 |
| 0.3538 | 614.29 | 8600 | 0.8731 | 0.6161 | 0.616 |
| 0.351 | 628.57 | 8800 | 0.8709 | 0.6159 | 0.616 |
| 0.349 | 642.86 | 9000 | 0.8819 | 0.6061 | 0.606 |
| 0.3472 | 657.14 | 9200 | 0.8737 | 0.6111 | 0.611 |
| 0.3467 | 671.43 | 9400 | 0.8780 | 0.6130 | 0.613 |
| 0.3439 | 685.71 | 9600 | 0.8817 | 0.6071 | 0.607 |
| 0.3438 | 700.0 | 9800 | 0.8804 | 0.6091 | 0.609 |
| 0.3438 | 714.29 | 10000 | 0.8800 | 0.6111 | 0.611 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_tf_3-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:50:12+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_tf\_3-seqsight\_32768\_512\_30M-L32\_all
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6909
* F1 Score: 0.5888
* Accuracy: 0.59
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6919
- F1 Score: 0.6799
- Accuracy: 0.683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6461 | 20.0 | 200 | 0.6580 | 0.6000 | 0.603 |
| 0.5734 | 40.0 | 400 | 0.6662 | 0.6180 | 0.618 |
| 0.532 | 60.0 | 600 | 0.6778 | 0.6247 | 0.625 |
| 0.5025 | 80.0 | 800 | 0.6898 | 0.6430 | 0.643 |
| 0.4838 | 100.0 | 1000 | 0.6792 | 0.6393 | 0.64 |
| 0.4746 | 120.0 | 1200 | 0.7229 | 0.6368 | 0.637 |
| 0.4645 | 140.0 | 1400 | 0.7044 | 0.6399 | 0.64 |
| 0.457 | 160.0 | 1600 | 0.7273 | 0.6256 | 0.626 |
| 0.45 | 180.0 | 1800 | 0.7324 | 0.6275 | 0.628 |
| 0.4418 | 200.0 | 2000 | 0.7164 | 0.6370 | 0.637 |
| 0.4331 | 220.0 | 2200 | 0.7270 | 0.6356 | 0.636 |
| 0.4267 | 240.0 | 2400 | 0.7376 | 0.6407 | 0.641 |
| 0.4152 | 260.0 | 2600 | 0.7623 | 0.6481 | 0.65 |
| 0.4069 | 280.0 | 2800 | 0.7552 | 0.6386 | 0.639 |
| 0.3966 | 300.0 | 3000 | 0.7456 | 0.6418 | 0.642 |
| 0.3864 | 320.0 | 3200 | 0.7708 | 0.6420 | 0.642 |
| 0.3763 | 340.0 | 3400 | 0.7715 | 0.6399 | 0.64 |
| 0.3666 | 360.0 | 3600 | 0.8455 | 0.6318 | 0.632 |
| 0.3579 | 380.0 | 3800 | 0.8078 | 0.6270 | 0.627 |
| 0.3486 | 400.0 | 4000 | 0.7841 | 0.6303 | 0.631 |
| 0.3398 | 420.0 | 4200 | 0.8332 | 0.6350 | 0.635 |
| 0.3327 | 440.0 | 4400 | 0.8236 | 0.6330 | 0.633 |
| 0.3255 | 460.0 | 4600 | 0.8601 | 0.6248 | 0.625 |
| 0.3191 | 480.0 | 4800 | 0.8533 | 0.6265 | 0.627 |
| 0.3109 | 500.0 | 5000 | 0.8650 | 0.6236 | 0.624 |
| 0.3038 | 520.0 | 5200 | 0.9033 | 0.6159 | 0.616 |
| 0.2997 | 540.0 | 5400 | 0.8836 | 0.6130 | 0.613 |
| 0.2926 | 560.0 | 5600 | 0.9222 | 0.6187 | 0.619 |
| 0.2848 | 580.0 | 5800 | 0.9135 | 0.6260 | 0.626 |
| 0.2818 | 600.0 | 6000 | 0.8921 | 0.6150 | 0.615 |
| 0.275 | 620.0 | 6200 | 0.9310 | 0.6089 | 0.609 |
| 0.2704 | 640.0 | 6400 | 0.9353 | 0.6119 | 0.612 |
| 0.2665 | 660.0 | 6600 | 0.9298 | 0.6049 | 0.605 |
| 0.2612 | 680.0 | 6800 | 0.9411 | 0.6079 | 0.608 |
| 0.2594 | 700.0 | 7000 | 0.9288 | 0.608 | 0.608 |
| 0.254 | 720.0 | 7200 | 0.9737 | 0.6170 | 0.617 |
| 0.2501 | 740.0 | 7400 | 0.9630 | 0.6160 | 0.616 |
| 0.2467 | 760.0 | 7600 | 0.9797 | 0.6100 | 0.61 |
| 0.2427 | 780.0 | 7800 | 0.9849 | 0.6140 | 0.614 |
| 0.2414 | 800.0 | 8000 | 0.9795 | 0.6080 | 0.608 |
| 0.2372 | 820.0 | 8200 | 0.9775 | 0.6120 | 0.612 |
| 0.2376 | 840.0 | 8400 | 0.9855 | 0.6110 | 0.611 |
| 0.2332 | 860.0 | 8600 | 1.0065 | 0.6150 | 0.615 |
| 0.2317 | 880.0 | 8800 | 0.9963 | 0.6220 | 0.622 |
| 0.2301 | 900.0 | 9000 | 1.0006 | 0.6090 | 0.609 |
| 0.2277 | 920.0 | 9200 | 1.0101 | 0.6130 | 0.613 |
| 0.2257 | 940.0 | 9400 | 1.0137 | 0.6110 | 0.611 |
| 0.2269 | 960.0 | 9600 | 1.0014 | 0.6150 | 0.615 |
| 0.2251 | 980.0 | 9800 | 1.0114 | 0.6140 | 0.614 |
| 0.2251 | 1000.0 | 10000 | 1.0150 | 0.6130 | 0.613 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_tf_2-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_tf_2-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:51:55+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_tf\_2-seqsight\_32768\_512\_30M-L32\_all
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6919
* F1 Score: 0.6799
* Accuracy: 0.683
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1957
- Bleu: 0.2317
- Gen Len: 18.1863
## Model description
More information needed
## Intended uses & limitations
More information needed
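A minimal inference sketch, assuming the standard T5 opus_books recipe of English-to-French translation with a task prefix (the dataset is listed as unknown above, so the language pair is an assumption):
```python
from transformers import pipeline

# The "translate English to French" prefix follows the common T5 opus_books recipe;
# the actual language pair is an assumption, since the training data is not documented.
translator = pipeline("text2text-generation", model="mkim-MASI/my_awesome_opus_books_model")
print(translator("translate English to French: The weather is nice today."))
```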
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 3.6463 | 1.0 | 1617 | 3.2789 | 0.1795 | 18.1985 |
| 3.5103 | 2.0 | 3234 | 3.1957 | 0.2317 | 18.1863 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_opus_books_model", "results": []}]} | mkim-MASI/my_awesome_opus_books_model | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T21:52:58+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| my\_awesome\_opus\_books\_model
===============================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.1957
* Bleu: 0.2317
* Gen Len: 18.1863
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_virus_covid-seqsight_32768_512_30M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_30M) on the [mahdibaghbanzadeh/GUE_virus_covid](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_virus_covid) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9224
- F1 Score: 0.6518
- Accuracy: 0.6529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after this list):
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
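For reference, these settings map roughly onto a Hugging Face `TrainingArguments` configuration, sketched below (illustrative only; the original training script is not part of this card):
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above. The reported batch size (2048)
# may be a global rather than per-device value, and PEFT-specific settings are omitted.
args = TrainingArguments(
    output_dir="GUE_virus_covid-seqsight_32768_512_30M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```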
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 2.1465 | 5.56 | 200 | 1.9710 | 0.2336 | 0.2539 |
| 1.8465 | 11.11 | 400 | 1.6176 | 0.3931 | 0.3956 |
| 1.6375 | 16.67 | 600 | 1.4616 | 0.4611 | 0.4602 |
| 1.5277 | 22.22 | 800 | 1.3575 | 0.5075 | 0.5050 |
| 1.436 | 27.78 | 1000 | 1.2653 | 0.5334 | 0.5332 |
| 1.3647 | 33.33 | 1200 | 1.2054 | 0.5519 | 0.5511 |
| 1.3149 | 38.89 | 1400 | 1.1689 | 0.5637 | 0.5643 |
| 1.2743 | 44.44 | 1600 | 1.1405 | 0.5772 | 0.5769 |
| 1.24 | 50.0 | 1800 | 1.1159 | 0.5791 | 0.5818 |
| 1.2097 | 55.56 | 2000 | 1.0947 | 0.5880 | 0.5897 |
| 1.1859 | 61.11 | 2200 | 1.0729 | 0.5979 | 0.5996 |
| 1.1605 | 66.67 | 2400 | 1.0559 | 0.6013 | 0.6017 |
| 1.1382 | 72.22 | 2600 | 1.0389 | 0.6059 | 0.6072 |
| 1.1185 | 77.78 | 2800 | 1.0256 | 0.6153 | 0.6150 |
| 1.1004 | 83.33 | 3000 | 1.0134 | 0.6165 | 0.6168 |
| 1.0844 | 88.89 | 3200 | 1.0026 | 0.6206 | 0.6227 |
| 1.0708 | 94.44 | 3400 | 0.9975 | 0.6246 | 0.6259 |
| 1.057 | 100.0 | 3600 | 0.9878 | 0.6274 | 0.6280 |
| 1.0453 | 105.56 | 3800 | 0.9782 | 0.6327 | 0.6331 |
| 1.031 | 111.11 | 4000 | 0.9719 | 0.6337 | 0.6347 |
| 1.0218 | 116.67 | 4200 | 0.9658 | 0.6375 | 0.6378 |
| 1.0101 | 122.22 | 4400 | 0.9569 | 0.6384 | 0.6391 |
| 0.9999 | 127.78 | 4600 | 0.9482 | 0.6421 | 0.6422 |
| 0.9854 | 133.33 | 4800 | 0.9376 | 0.6443 | 0.6458 |
| 0.9755 | 138.89 | 5000 | 0.9321 | 0.6494 | 0.6497 |
| 0.9672 | 144.44 | 5200 | 0.9305 | 0.6487 | 0.6499 |
| 0.9604 | 150.0 | 5400 | 0.9269 | 0.6515 | 0.6512 |
| 0.9537 | 155.56 | 5600 | 0.9231 | 0.6504 | 0.6507 |
| 0.9489 | 161.11 | 5800 | 0.9206 | 0.6534 | 0.6533 |
| 0.9427 | 166.67 | 6000 | 0.9185 | 0.6562 | 0.6563 |
| 0.9365 | 172.22 | 6200 | 0.9173 | 0.6547 | 0.6550 |
| 0.9344 | 177.78 | 6400 | 0.9154 | 0.6530 | 0.6533 |
| 0.9282 | 183.33 | 6600 | 0.9135 | 0.6542 | 0.6543 |
| 0.9257 | 188.89 | 6800 | 0.9119 | 0.6557 | 0.6561 |
| 0.9209 | 194.44 | 7000 | 0.9101 | 0.6575 | 0.6571 |
| 0.9194 | 200.0 | 7200 | 0.9087 | 0.6572 | 0.6578 |
| 0.9154 | 205.56 | 7400 | 0.9110 | 0.6592 | 0.6587 |
| 0.9123 | 211.11 | 7600 | 0.9090 | 0.6547 | 0.6557 |
| 0.9101 | 216.67 | 7800 | 0.9061 | 0.6581 | 0.6583 |
| 0.9081 | 222.22 | 8000 | 0.9076 | 0.6572 | 0.6568 |
| 0.905 | 227.78 | 8200 | 0.9068 | 0.6569 | 0.6573 |
| 0.9032 | 233.33 | 8400 | 0.9065 | 0.6569 | 0.6575 |
| 0.9021 | 238.89 | 8600 | 0.9054 | 0.6595 | 0.6597 |
| 0.9007 | 244.44 | 8800 | 0.9046 | 0.6578 | 0.6581 |
| 0.8988 | 250.0 | 9000 | 0.9051 | 0.6577 | 0.6583 |
| 0.8974 | 255.56 | 9200 | 0.9032 | 0.6574 | 0.6578 |
| 0.8964 | 261.11 | 9400 | 0.9041 | 0.6578 | 0.6580 |
| 0.8956 | 266.67 | 9600 | 0.9040 | 0.6576 | 0.6581 |
| 0.8942 | 272.22 | 9800 | 0.9040 | 0.6583 | 0.6585 |
| 0.8938 | 277.78 | 10000 | 0.9039 | 0.6584 | 0.6587 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_30M", "model-index": [{"name": "GUE_virus_covid-seqsight_32768_512_30M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_virus_covid-seqsight_32768_512_30M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_30M",
"region:us"
] | null | 2024-04-16T21:58:50+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us
| GUE\_virus\_covid-seqsight\_32768\_512\_30M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_30M on the mahdibaghbanzadeh/GUE\_virus\_covid dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9224
* F1 Score: 0.6518
* Accuracy: 0.6529
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_shp2_dpo5
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1695
- Rewards/chosen: -3.4447
- Rewards/rejected: -4.4007
- Rewards/accuracies: 0.5400
- Rewards/margins: 0.9560
- Logps/rejected: -245.2697
- Logps/chosen: -255.8932
- Logits/rejected: -0.8140
- Logits/chosen: -0.7999
## Model description
More information needed
## Intended uses & limitations
More information needed
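A minimal loading sketch, assuming this repository stores a PEFT adapter for the chat base model (suggested by the `peft` and `dpo` tags, though not stated explicitly); the gated Llama-2 base weights must be accessible:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"   # gated; requires an accepted license / auth token
adapter_id = "guoyu-zhang/model_shp2_dpo5"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the DPO-trained adapter
model.eval()
```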
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0371 | 2.67 | 100 | 1.6792 | -0.5920 | -0.6679 | 0.4600 | 0.0759 | -237.8042 | -250.1878 | -0.5568 | -0.5337 |
| 0.0071 | 5.33 | 200 | 1.8321 | 0.5323 | 0.0212 | 0.5500 | 0.5112 | -236.4261 | -247.9392 | -0.7524 | -0.7416 |
| 0.0001 | 8.0 | 300 | 2.1597 | -2.9208 | -3.8736 | 0.5400 | 0.9528 | -244.2156 | -254.8454 | -0.8222 | -0.8092 |
| 0.0 | 10.67 | 400 | 2.1619 | -3.3914 | -4.3596 | 0.5300 | 0.9682 | -245.1877 | -255.7867 | -0.8137 | -0.8004 |
| 0.0 | 13.33 | 500 | 2.1533 | -3.3951 | -4.3755 | 0.5300 | 0.9804 | -245.2194 | -255.7940 | -0.8134 | -0.7998 |
| 0.0 | 16.0 | 600 | 2.1833 | -3.4274 | -4.3755 | 0.5300 | 0.9480 | -245.2194 | -255.8587 | -0.8142 | -0.8001 |
| 0.0 | 18.67 | 700 | 2.1523 | -3.4138 | -4.3824 | 0.5300 | 0.9686 | -245.2332 | -255.8314 | -0.8134 | -0.7991 |
| 0.0 | 21.33 | 800 | 2.1568 | -3.4182 | -4.3819 | 0.5300 | 0.9637 | -245.2321 | -255.8403 | -0.8134 | -0.7993 |
| 0.0 | 24.0 | 900 | 2.1621 | -3.4517 | -4.3884 | 0.5300 | 0.9367 | -245.2452 | -255.9073 | -0.8143 | -0.8002 |
| 0.0 | 26.67 | 1000 | 2.1695 | -3.4447 | -4.4007 | 0.5400 | 0.9560 | -245.2697 | -255.8932 | -0.8140 | -0.7999 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_shp2_dpo5", "results": []}]} | guoyu-zhang/model_shp2_dpo5 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-16T21:59:37+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
| model\_shp2\_dpo5
=================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1695
* Rewards/chosen: -3.4447
* Rewards/rejected: -4.4007
* Rewards/accuracies: 0.5400
* Rewards/margins: 0.9560
* Logps/rejected: -245.2697
* Logps/chosen: -255.8932
* Logits/rejected: -0.8140
* Logits/chosen: -0.7999
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.1
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cackerman/rewrites_gemma7_4bit_ft_full_big2 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T22:01:02+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
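A minimal text-generation sketch based only on the `gpt2` and `text-generation` tags (the prompting format and training language are not documented here, so the Russian prompt is illustrative):
```python
from transformers import pipeline

# Assumes a standard GPT-2-style causal language model, as suggested by the repository tags.
generator = pipeline("text-generation", model="Trubnik1967/rugpt3small_based_on_gpt2_v4")
print(generator("Привет! Сегодня", max_new_tokens=30))
```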
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Trubnik1967/rugpt3small_based_on_gpt2_v4 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T22:03:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
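A minimal fill-mask sketch based only on the `camembert` and `fill-mask` tags; CamemBERT-style checkpoints use `<mask>` as the mask token, and the example sentence is illustrative:
```python
from transformers import pipeline

# Assumes a CamemBERT-style masked-language model, as suggested by the repository tags.
unmasker = pipeline("fill-mask", model="hippoleveque/my-first-model")
print(unmasker("Le camembert est <mask>."))
```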
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | hippoleveque/my-first-model | null | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T22:04:02+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #camembert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# TS-Corpus WordPiece Tokenizer (32k, Uncased)
## Overview
This repository contains a WordPiece tokenizer with a vocabulary size of 32,000, trained uncased on various datasets from the TS Corpus website. It is designed to handle Turkish text, leveraging rich and diverse sources to provide a robust tool for natural language processing tasks.
## Dataset Sources
The tokenizer was trained using multiple corpora from the TS Corpus, specifically:
- [TS Corpus V2](https://tscorpus.com/corpora/ts-corpus-v2/)
- [TS Wikipedia Corpus](https://tscorpus.com/corpora/ts-wikipedia-corpus/)
- [TS Abstract Corpus](https://tscorpus.com/corpora/ts-abstract-corpus/)
- [TS Idioms and Proverbs Corpus](https://tscorpus.com/corpora/ts-idioms-and-proverbs-corpus/)
- [Syllable Corpus](https://tscorpus.com/corpora/syllable-corpus/)
- [Turkish Constitution Corpus](https://tscorpus.com/corpora/turkish-constitution-corpus/)
These diverse sources include a wide range of texts from encyclopedic articles to legal documents, providing a comprehensive linguistic foundation for the tokenizer.
## Tokenizer Model
The tokenizer uses the WordPiece model, which is widely utilized in many modern NLP systems. It is particularly effective in handling languages with rich morphology like Turkish due to its subword segmentation approach. This tokenizer does not differentiate between uppercase and lowercase letters, ensuring uniformity in tokenization regardless of text casing.
## Usage
To use this tokenizer, you can load it via the Hugging Face `transformers` library as follows:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("tahaenesaslanturk/ts-corpus-wordpiece-32k-uncased")
```
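Once loaded, it behaves like any other fast tokenizer from `transformers`. A small, hedged usage sketch (the example sentence is illustrative and not taken from the training corpora):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tahaenesaslanturk/ts-corpus-wordpiece-32k-uncased")

# Split a Turkish sentence into WordPiece sub-words and ids (the vocabulary is uncased).
text = "Merhaba dünya, bu bir deneme cümlesidir."
tokens = tokenizer.tokenize(text)
ids = tokenizer.encode(text)
print(tokens)
print(tokenizer.decode(ids))
```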
| {"language": ["tr"], "license": "mit", "library_name": "transformers"} | tahaenesaslanturk/ts-corpus-wordpiece-32k-uncased | null | [
"transformers",
"tr",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T22:06:23+00:00 | [] | [
"tr"
] | TAGS
#transformers #tr #license-mit #endpoints_compatible #region-us
|
# TS-Corpus WordPiece Tokenizer (32k, Uncased)
## Overview
This repository contains a WordPiece tokenizer with a vocabulary size of 32,000, trained uncased on various datasets from the TS Corpus website. It is designed to handle Turkish text, leveraging rich and diverse sources to provide a robust tool for natural language processing tasks.
## Dataset Sources
The tokenizer was trained using multiple corpora from the TS Corpus, specifically:
- TS Corpus V2
- TS Wikipedia Corpus
- TS Abstract Corpus
- TS Idioms and Proverbs Corpus
- Syllable Corpus
- Turkish Constitution Corpus
These diverse sources include a wide range of texts from encyclopedic articles to legal documents, providing a comprehensive linguistic foundation for the tokenizer.
## Tokenizer Model
The tokenizer uses the WordPiece model, which is widely utilized in many modern NLP systems. It is particularly effective in handling languages with rich morphology like Turkish due to its subword segmentation approach. This tokenizer does not differentiate between uppercase and lowercase letters, ensuring uniformity in tokenization regardless of text casing.
## Usage
To use this tokenizer, you can load it via the Hugging Face 'transformers' library as follows:
| [
"# TS-Corpus WordPiece Tokenizer (32k, Uncased)",
"## Overview\nThis repository contains a WordPiece tokenizer with a vocabulary size of 32,000, trained uncased on various datasets from the TS Corpus website. It is designed to handle Turkish text, leveraging rich and diverse sources to provide a robust tool for natural language processing tasks.",
"## Dataset Sources\nThe tokenizer was trained using multiple corpora from the TS Corpus, specifically:\n- TS Corpus V2\n- TS Wikipedia Corpus\n- TS Abstract Corpus\n- TS Idioms and Proverbs Corpus\n- Syllable Corpus\n- Turkish Constitution Corpus\n\nThese diverse sources include a wide range of texts from encyclopedic articles to legal documents, providing a comprehensive linguistic foundation for the tokenizer.",
"## Tokenizer Model\nThe tokenizer uses the WordPiece model, which is widely utilized in many modern NLP systems. It is particularly effective in handling languages with rich morphology like Turkish due to its subword segmentation approach. This tokenizer does not differentiate between uppercase and lowercase letters, ensuring uniformity in tokenization regardless of text casing.",
"## Usage\nTo use this tokenizer, you can load it via the Hugging Face 'transformers' library as follows:"
] | [
"TAGS\n#transformers #tr #license-mit #endpoints_compatible #region-us \n",
"# TS-Corpus WordPiece Tokenizer (32k, Uncased)",
"## Overview\nThis repository contains a WordPiece tokenizer with a vocabulary size of 32,000, trained uncased on various datasets from the TS Corpus website. It is designed to handle Turkish text, leveraging rich and diverse sources to provide a robust tool for natural language processing tasks.",
"## Dataset Sources\nThe tokenizer was trained using multiple corpora from the TS Corpus, specifically:\n- TS Corpus V2\n- TS Wikipedia Corpus\n- TS Abstract Corpus\n- TS Idioms and Proverbs Corpus\n- Syllable Corpus\n- Turkish Constitution Corpus\n\nThese diverse sources include a wide range of texts from encyclopedic articles to legal documents, providing a comprehensive linguistic foundation for the tokenizer.",
"## Tokenizer Model\nThe tokenizer uses the WordPiece model, which is widely utilized in many modern NLP systems. It is particularly effective in handling languages with rich morphology like Turkish due to its subword segmentation approach. This tokenizer does not differentiate between uppercase and lowercase letters, ensuring uniformity in tokenization regardless of text casing.",
"## Usage\nTo use this tokenizer, you can load it via the Hugging Face 'transformers' library as follows:"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1801
- Bleu: 0.2081
- Gen Len: 18.156
## Model description
More information needed
## Intended uses & limitations
More information needed
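The card does not record which `opus_books` language pair was used, so the sketch below assumes the English-to-French setup of the standard tutorial; adjust the task prefix if the checkpoint was trained on a different pair.

```python
from transformers import pipeline

# Hedged example: the task prefix and language pair are assumptions, not confirmed by this card.
translator = pipeline("text2text-generation", model="willw9758/my_awesome_opus_books_model")
result = translator("translate English to French: Legumes share resources with nitrogen-fixing bacteria.")
print(result[0]["generated_text"])
```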
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 3.644 | 1.0 | 1617 | 3.2637 | 0.1784 | 18.1684 |
| 3.5194 | 2.0 | 3234 | 3.1801 | 0.2081 | 18.156 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_opus_books_model", "results": []}]} | willw9758/my_awesome_opus_books_model | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T22:07:01+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| my\_awesome\_opus\_books\_model
===============================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.1801
* Bleu: 0.2081
* Gen Len: 18.156
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
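To resume training you first need the run artifacts locally. A hedged sketch using the `huggingface_hub` client (the target directory name is arbitrary):

```python
from huggingface_hub import snapshot_download

# Download the trained Huggy files (including the .onnx policy and run configuration) from the Hub.
local_dir = snapshot_download(repo_id="ahforoughi/ppo-Huggy", local_dir="./ppo-Huggy")
print(local_dir)
```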
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ahforoughi/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]} | ahforoughi/ppo-Huggy | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null | 2024-04-16T22:11:29+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
|
# ppo Agent playing Huggy
This is a trained model of a ppo agent playing Huggy
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how works ML-Agents:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: ahforoughi/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: ahforoughi/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n",
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: ahforoughi/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Metaspectral/Tai
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Tai-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tai-i1-GGUF/resolve/main/Tai.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
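The Q6_K quant above is split into two parts; assuming they are plain byte splits (as is usual for these multi-part GGUF uploads), they can be joined after download like this:

```python
import shutil

# Concatenate the downloaded part files into a single GGUF file.
parts = ["Tai.i1-Q6_K.gguf.part1of2", "Tai.i1-Q6_K.gguf.part2of2"]
with open("Tai.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```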
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama2", "library_name": "transformers", "base_model": "Metaspectral/Tai", "quantized_by": "mradermacher"} | mradermacher/Tai-i1-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:Metaspectral/Tai",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T22:12:24+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-Metaspectral/Tai #license-llama2 #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-Metaspectral/Tai #license-llama2 #endpoints_compatible #region-us \n"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tapt_seq_bn_amazon_helpfulness_classification_model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3322
- Accuracy: 0.866
- F1 Macro: 0.5934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 3
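As a rough guide, these settings map onto `transformers.TrainingArguments` as sketched below; the output directory is a placeholder and the exact trainer setup is an assumption, since the original training script is not part of this repository.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="tapt_seq_bn_helpfulness",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    num_train_epochs=3,
)
```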
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.3392 | 1.0 | 1563 | 0.3329 | 0.859 | 0.5394 |
| 0.3347 | 2.0 | 3126 | 0.3382 | 0.863 | 0.5814 |
| 0.3305 | 3.0 | 4689 | 0.3322 | 0.866 | 0.5934 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "roberta-base", "model-index": [{"name": "tapt_seq_bn_amazon_helpfulness_classification_model", "results": []}]} | BigTMiami/tapt_seq_bn_amazon_helpfulness_classification_model | null | [
"tensorboard",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"region:us"
] | null | 2024-04-16T22:12:53+00:00 | [] | [] | TAGS
#tensorboard #generated_from_trainer #base_model-roberta-base #license-mit #region-us
| tapt\_seq\_bn\_amazon\_helpfulness\_classification\_model
=========================================================
This model is a fine-tuned version of roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3322
* Accuracy: 0.866
* F1 Macro: 0.5934
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.06
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#tensorboard #generated_from_trainer #base_model-roberta-base #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hermes-mistral-7b-diataxis
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
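This repository stores a PEFT adapter rather than full model weights, so inference loads the base model first and then applies the adapter. A hedged sketch (prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Hermes-2-Pro-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the fine-tuned adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "enzokro/hermes-mistral-7b-diataxis")

prompt = "Explain the four Diátaxis documentation types."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```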
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "NousResearch/Hermes-2-Pro-Mistral-7B", "model-index": [{"name": "hermes-mistral-7b-diataxis", "results": []}]} | enzokro/hermes-mistral-7b-diataxis | null | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T22:14:24+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #mistral #trl #sft #generated_from_trainer #dataset-generator #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #license-apache-2.0 #region-us
|
# hermes-mistral-7b-diataxis
This model is a fine-tuned version of NousResearch/Hermes-2-Pro-Mistral-7B on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 | [
"# hermes-mistral-7b-diataxis\n\nThis model is a fine-tuned version of NousResearch/Hermes-2-Pro-Mistral-7B on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 4",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #mistral #trl #sft #generated_from_trainer #dataset-generator #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #license-apache-2.0 #region-us \n",
"# hermes-mistral-7b-diataxis\n\nThis model is a fine-tuned version of NousResearch/Hermes-2-Pro-Mistral-7B on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 4",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] |
reinforcement-learning | null |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| {"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-pixelcopter-1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "26.40 +/- 32.44", "name": "mean_reward", "verified": false}]}]}]} | nvasko/Reinforce-pixelcopter-1 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-04-16T22:14:31+00:00 | [] | [] | TAGS
#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing Pixelcopter-PLE-v0
This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
| [
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] | [
"TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_32768_512_43M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3221
- F1 Score: 0.5982
- Accuracy: 0.5987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.599 | 50.0 | 200 | 0.8257 | 0.6063 | 0.6069 |
| 0.2992 | 100.0 | 400 | 1.2253 | 0.6138 | 0.6150 |
| 0.1688 | 150.0 | 600 | 1.4774 | 0.6061 | 0.6069 |
| 0.1164 | 200.0 | 800 | 1.7828 | 0.5978 | 0.6052 |
| 0.091 | 250.0 | 1000 | 1.8064 | 0.6108 | 0.6117 |
| 0.0764 | 300.0 | 1200 | 1.9335 | 0.5973 | 0.5971 |
| 0.0645 | 350.0 | 1400 | 1.9893 | 0.6067 | 0.6085 |
| 0.0556 | 400.0 | 1600 | 2.0162 | 0.6037 | 0.6036 |
| 0.0514 | 450.0 | 1800 | 2.1402 | 0.5941 | 0.5938 |
| 0.0435 | 500.0 | 2000 | 2.2364 | 0.5951 | 0.5954 |
| 0.0395 | 550.0 | 2200 | 2.3967 | 0.5917 | 0.5922 |
| 0.0363 | 600.0 | 2400 | 2.2578 | 0.5906 | 0.5905 |
| 0.033 | 650.0 | 2600 | 2.4658 | 0.5882 | 0.5889 |
| 0.0311 | 700.0 | 2800 | 2.6630 | 0.5986 | 0.5987 |
| 0.0293 | 750.0 | 3000 | 2.5625 | 0.5972 | 0.5971 |
| 0.0271 | 800.0 | 3200 | 2.6054 | 0.5924 | 0.5922 |
| 0.0247 | 850.0 | 3400 | 2.6773 | 0.5860 | 0.5873 |
| 0.0228 | 900.0 | 3600 | 2.8256 | 0.5758 | 0.5775 |
| 0.0224 | 950.0 | 3800 | 2.5880 | 0.5932 | 0.5938 |
| 0.0208 | 1000.0 | 4000 | 2.5665 | 0.5859 | 0.5856 |
| 0.0205 | 1050.0 | 4200 | 2.7948 | 0.5922 | 0.5922 |
| 0.021 | 1100.0 | 4400 | 2.8708 | 0.5801 | 0.5808 |
| 0.0202 | 1150.0 | 4600 | 2.9003 | 0.5987 | 0.5987 |
| 0.0187 | 1200.0 | 4800 | 2.7679 | 0.6028 | 0.6036 |
| 0.0181 | 1250.0 | 5000 | 2.7986 | 0.5875 | 0.5873 |
| 0.0176 | 1300.0 | 5200 | 2.7506 | 0.6038 | 0.6036 |
| 0.0169 | 1350.0 | 5400 | 2.8018 | 0.6086 | 0.6085 |
| 0.0159 | 1400.0 | 5600 | 2.9163 | 0.6102 | 0.6101 |
| 0.0165 | 1450.0 | 5800 | 2.9603 | 0.5886 | 0.5889 |
| 0.0153 | 1500.0 | 6000 | 2.8451 | 0.6039 | 0.6036 |
| 0.0145 | 1550.0 | 6200 | 2.8933 | 0.6022 | 0.6020 |
| 0.0152 | 1600.0 | 6400 | 3.0017 | 0.5899 | 0.5922 |
| 0.0145 | 1650.0 | 6600 | 2.7772 | 0.5923 | 0.5922 |
| 0.0147 | 1700.0 | 6800 | 3.0188 | 0.5991 | 0.6003 |
| 0.0139 | 1750.0 | 7000 | 3.0917 | 0.6085 | 0.6085 |
| 0.013 | 1800.0 | 7200 | 2.9142 | 0.5922 | 0.5922 |
| 0.0134 | 1850.0 | 7400 | 3.0261 | 0.5987 | 0.5987 |
| 0.0127 | 1900.0 | 7600 | 3.0139 | 0.5964 | 0.5987 |
| 0.0125 | 1950.0 | 7800 | 3.0335 | 0.6038 | 0.6036 |
| 0.0123 | 2000.0 | 8000 | 3.2824 | 0.5955 | 0.5954 |
| 0.012 | 2050.0 | 8200 | 3.1187 | 0.5983 | 0.5987 |
| 0.0116 | 2100.0 | 8400 | 3.2038 | 0.5999 | 0.6003 |
| 0.0118 | 2150.0 | 8600 | 3.0883 | 0.5980 | 0.5987 |
| 0.0116 | 2200.0 | 8800 | 3.1863 | 0.6021 | 0.6020 |
| 0.0113 | 2250.0 | 9000 | 3.0872 | 0.5957 | 0.5971 |
| 0.0103 | 2300.0 | 9200 | 3.2129 | 0.5972 | 0.5971 |
| 0.0107 | 2350.0 | 9400 | 3.1408 | 0.5952 | 0.5954 |
| 0.0107 | 2400.0 | 9600 | 3.1199 | 0.5969 | 0.5971 |
| 0.0104 | 2450.0 | 9800 | 3.1445 | 0.5983 | 0.5987 |
| 0.0104 | 2500.0 | 10000 | 3.1618 | 0.5985 | 0.5987 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_32768_512_43M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_32768_512_43M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-16T22:15:02+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_prom\_prom\_300\_tata-seqsight\_32768\_512\_43M-L32\_all
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3221
* F1 Score: 0.5982
* Accuracy: 0.5987
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_32768_512_43M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5217
- F1 Score: 0.8331
- Accuracy: 0.8331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5779 | 9.52 | 200 | 0.4910 | 0.7638 | 0.7650 |
| 0.4643 | 19.05 | 400 | 0.4594 | 0.7870 | 0.7873 |
| 0.4124 | 28.57 | 600 | 0.4503 | 0.7976 | 0.7976 |
| 0.367 | 38.1 | 800 | 0.4425 | 0.8056 | 0.8057 |
| 0.324 | 47.62 | 1000 | 0.4449 | 0.8190 | 0.8191 |
| 0.2845 | 57.14 | 1200 | 0.4212 | 0.8252 | 0.8253 |
| 0.2571 | 66.67 | 1400 | 0.4423 | 0.8268 | 0.8268 |
| 0.236 | 76.19 | 1600 | 0.4559 | 0.8257 | 0.8259 |
| 0.2198 | 85.71 | 1800 | 0.4883 | 0.8233 | 0.8236 |
| 0.2049 | 95.24 | 2000 | 0.4816 | 0.8259 | 0.8263 |
| 0.1948 | 104.76 | 2200 | 0.4605 | 0.8276 | 0.8278 |
| 0.1843 | 114.29 | 2400 | 0.4812 | 0.8260 | 0.8265 |
| 0.1756 | 123.81 | 2600 | 0.4816 | 0.8375 | 0.8376 |
| 0.1666 | 133.33 | 2800 | 0.5154 | 0.8295 | 0.8298 |
| 0.1608 | 142.86 | 3000 | 0.5224 | 0.8284 | 0.8289 |
| 0.1538 | 152.38 | 3200 | 0.5356 | 0.8328 | 0.8331 |
| 0.15 | 161.9 | 3400 | 0.5456 | 0.8318 | 0.8321 |
| 0.1448 | 171.43 | 3600 | 0.5515 | 0.8322 | 0.8325 |
| 0.1415 | 180.95 | 3800 | 0.5074 | 0.8444 | 0.8444 |
| 0.1359 | 190.48 | 4000 | 0.5512 | 0.8279 | 0.8283 |
| 0.1324 | 200.0 | 4200 | 0.5171 | 0.8413 | 0.8413 |
| 0.1291 | 209.52 | 4400 | 0.5476 | 0.8330 | 0.8332 |
| 0.1276 | 219.05 | 4600 | 0.5763 | 0.8270 | 0.8274 |
| 0.1248 | 228.57 | 4800 | 0.5617 | 0.8313 | 0.8315 |
| 0.1218 | 238.1 | 5000 | 0.6029 | 0.8282 | 0.8287 |
| 0.1197 | 247.62 | 5200 | 0.5999 | 0.8257 | 0.8263 |
| 0.1178 | 257.14 | 5400 | 0.5682 | 0.8340 | 0.8342 |
| 0.1144 | 266.67 | 5600 | 0.5866 | 0.8322 | 0.8325 |
| 0.1136 | 276.19 | 5800 | 0.5854 | 0.8312 | 0.8315 |
| 0.1109 | 285.71 | 6000 | 0.5936 | 0.8327 | 0.8331 |
| 0.1097 | 295.24 | 6200 | 0.5807 | 0.8358 | 0.8359 |
| 0.1076 | 304.76 | 6400 | 0.5966 | 0.8359 | 0.8361 |
| 0.1074 | 314.29 | 6600 | 0.6074 | 0.8290 | 0.8295 |
| 0.1058 | 323.81 | 6800 | 0.6044 | 0.8324 | 0.8327 |
| 0.1043 | 333.33 | 7000 | 0.5907 | 0.8357 | 0.8359 |
| 0.1035 | 342.86 | 7200 | 0.6122 | 0.8318 | 0.8321 |
| 0.1015 | 352.38 | 7400 | 0.6156 | 0.8332 | 0.8334 |
| 0.1015 | 361.9 | 7600 | 0.6027 | 0.8349 | 0.8351 |
| 0.0983 | 371.43 | 7800 | 0.6057 | 0.8357 | 0.8359 |
| 0.0987 | 380.95 | 8000 | 0.6079 | 0.8327 | 0.8329 |
| 0.0994 | 390.48 | 8200 | 0.6066 | 0.8333 | 0.8336 |
| 0.0981 | 400.0 | 8400 | 0.6156 | 0.8330 | 0.8332 |
| 0.0959 | 409.52 | 8600 | 0.6297 | 0.8319 | 0.8323 |
| 0.0957 | 419.05 | 8800 | 0.6027 | 0.8330 | 0.8332 |
| 0.0953 | 428.57 | 9000 | 0.6083 | 0.8317 | 0.8319 |
| 0.0949 | 438.1 | 9200 | 0.6205 | 0.8329 | 0.8332 |
| 0.0932 | 447.62 | 9400 | 0.6148 | 0.8338 | 0.8340 |
| 0.0935 | 457.14 | 9600 | 0.6175 | 0.8328 | 0.8331 |
| 0.0926 | 466.67 | 9800 | 0.6251 | 0.8324 | 0.8327 |
| 0.0929 | 476.19 | 10000 | 0.6206 | 0.8336 | 0.8338 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_32768_512_43M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_32768_512_43M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-16T22:15:34+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_32768\_512\_43M-L32\_all
===============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5217
* F1 Score: 0.8331
* Accuracy: 0.8331
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | setfit |
# SetFit with d0rj/ruRoberta-distilled
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [d0rj/ruRoberta-distilled](https://huggingface.co/d0rj/ruRoberta-distilled) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
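A minimal training sketch of that two-step procedure, using the `setfit` 1.0 API listed under Framework Versions; the tiny training set reuses two sentences from the label table below and is far smaller than the 16-shot-per-label setup actually used.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot data (two sentences from the label table); not the real training set.
train_ds = Dataset.from_dict({
    "text": ["ПЦР Мазок на COVID-19 от 12.02.2021 - положительный.", "Т тела 36.7."],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("d0rj/ruRoberta-distilled")
args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)

# train() runs both steps: contrastive fine-tuning of the body, then fitting the LogisticRegression head.
trainer.train()

preds = model.predict(["Наследственность не отягощена ."])
```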
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [d0rj/ruRoberta-distilled](https://huggingface.co/d0rj/ruRoberta-distilled)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 514 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'13.05.2022 выполнено МСКТ ОГК, заключение: КТ-картина может соответствовать пневмонии: двусторонняя полисегментарная.'</li><li>'ПЦР Мазок на COVID-19 от 12.02.2021 - положительный.'</li><li>'ПТИ от августа 2022 года – 65%.'</li></ul> |
| 0 | <ul><li>'Артериальное давление 120/80 мм.рт.ст.'</li><li>'Т тела 36.7.'</li><li>'находился в 3 тер.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.6906 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("DimasikKurd/ruRoberta-distilled-med-kd")
# Run inference
preds = model("Наследственность не отягощена .")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 9.6726 | 72 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 16 |
| 1 | 16 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: 4903
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0588 | 1 | 0.4088 | - |
| 2.9412 | 50 | 0.0046 | - |
| 0.0002 | 1 | 0.1364 | - |
| 0.0102 | 50 | 0.0369 | - |
| 0.0204 | 100 | 0.0035 | - |
| 0.0306 | 150 | 0.001 | - |
| 0.0408 | 200 | 0.0005 | - |
| 0.0510 | 250 | 0.0003 | - |
| 0.0612 | 300 | 0.0002 | - |
| 0.0714 | 350 | 0.0002 | - |
| 0.0816 | 400 | 0.0001 | - |
| 0.0918 | 450 | 0.0001 | - |
| 0.1020 | 500 | 0.0001 | - |
| 0.1122 | 550 | 0.0001 | - |
| 0.1223 | 600 | 0.0 | - |
| 0.1325 | 650 | 0.0 | - |
| 0.1427 | 700 | 0.0001 | - |
| 0.1529 | 750 | 0.0 | - |
| 0.1631 | 800 | 0.0 | - |
| 0.1733 | 850 | 0.0 | - |
| 0.1835 | 900 | 0.0 | - |
| 0.1937 | 950 | 0.0 | - |
| 0.2039 | 1000 | 0.0 | - |
| 0.2141 | 1050 | 0.0 | - |
| 0.2243 | 1100 | 0.0 | - |
| 0.2345 | 1150 | 0.0 | - |
| 0.2447 | 1200 | 0.0 | - |
| 0.2549 | 1250 | 0.0 | - |
| 0.2651 | 1300 | 0.0 | - |
| 0.2753 | 1350 | 0.0 | - |
| 0.2855 | 1400 | 0.0 | - |
| 0.2957 | 1450 | 0.0 | - |
| 0.3059 | 1500 | 0.0 | - |
| 0.3161 | 1550 | 0.0 | - |
| 0.3263 | 1600 | 0.0 | - |
| 0.3365 | 1650 | 0.0 | - |
| 0.3467 | 1700 | 0.0 | - |
| 0.3569 | 1750 | 0.0 | - |
| 0.3670 | 1800 | 0.0 | - |
| 0.3772 | 1850 | 0.0 | - |
| 0.3874 | 1900 | 0.0 | - |
| 0.3976 | 1950 | 0.0 | - |
| 0.4078 | 2000 | 0.0 | - |
| 0.4180 | 2050 | 0.0 | - |
| 0.4282 | 2100 | 0.0 | - |
| 0.4384 | 2150 | 0.0 | - |
| 0.4486 | 2200 | 0.0 | - |
| 0.4588 | 2250 | 0.0 | - |
| 0.4690 | 2300 | 0.0 | - |
| 0.4792 | 2350 | 0.0 | - |
| 0.4894 | 2400 | 0.0 | - |
| 0.4996 | 2450 | 0.0 | - |
| 0.5098 | 2500 | 0.0 | - |
| 0.5200 | 2550 | 0.0 | - |
| 0.5302 | 2600 | 0.0 | - |
| 0.5404 | 2650 | 0.0 | - |
| 0.5506 | 2700 | 0.0 | - |
| 0.5608 | 2750 | 0.0 | - |
| 0.5710 | 2800 | 0.0 | - |
| 0.5812 | 2850 | 0.0 | - |
| 0.5914 | 2900 | 0.0 | - |
| 0.6015 | 2950 | 0.0 | - |
| 0.6117 | 3000 | 0.0 | - |
| 0.6219 | 3050 | 0.0 | - |
| 0.6321 | 3100 | 0.0 | - |
| 0.6423 | 3150 | 0.0 | - |
| 0.6525 | 3200 | 0.0 | - |
| 0.6627 | 3250 | 0.0 | - |
| 0.6729 | 3300 | 0.0 | - |
| 0.6831 | 3350 | 0.0 | - |
| 0.6933 | 3400 | 0.0 | - |
| 0.7035 | 3450 | 0.0 | - |
| 0.7137 | 3500 | 0.0 | - |
| 0.7239 | 3550 | 0.0 | - |
| 0.7341 | 3600 | 0.0 | - |
| 0.7443 | 3650 | 0.0 | - |
| 0.7545 | 3700 | 0.0 | - |
| 0.7647 | 3750 | 0.0 | - |
| 0.7749 | 3800 | 0.0 | - |
| 0.7851 | 3850 | 0.0 | - |
| 0.7953 | 3900 | 0.0 | - |
| 0.8055 | 3950 | 0.0 | - |
| 0.8157 | 4000 | 0.0 | - |
| 0.8259 | 4050 | 0.0 | - |
| 0.8361 | 4100 | 0.0 | - |
| 0.8462 | 4150 | 0.0 | - |
| 0.8564 | 4200 | 0.0 | - |
| 0.8666 | 4250 | 0.0 | - |
| 0.8768 | 4300 | 0.0 | - |
| 0.8870 | 4350 | 0.0 | - |
| 0.8972 | 4400 | 0.0 | - |
| 0.9074 | 4450 | 0.0 | - |
| 0.9176 | 4500 | 0.0 | - |
| 0.9278 | 4550 | 0.0 | - |
| 0.9380 | 4600 | 0.0 | - |
| 0.9482 | 4650 | 0.0 | - |
| 0.9584 | 4700 | 0.0 | - |
| 0.9686 | 4750 | 0.0 | - |
| 0.9788 | 4800 | 0.0 | - |
| 0.9890 | 4850 | 0.0 | - |
| 0.9992 | 4900 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.6.1
- Transformers: 4.38.2
- PyTorch: 2.2.1+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "widget": [{"text": "\u041d\u0430\u0441\u043b\u0435\u0434\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0441\u0442\u044c \u043d\u0435 \u043e\u0442\u044f\u0433\u043e\u0449\u0435\u043d\u0430 ."}, {"text": "\u0430\u043d\u0430\u043c\u043d\u0435\u0437: \u0432 \u0442\u0435\u0447\u0435\u043d\u0438\u0435 3-\u0445 \u043b\u0435\u0442 \u0432 \u044d\u043f\u0438\u0434."}, {"text": "\u041a\u043e\u0441\u0442\u043d\u043e-\u043c\u044b\u0448\u0435\u0447\u043d\u044b\u0439 \u0430\u043f\u043f\u0430\u0440\u0430\u0442 \u2013 \u0431\u0435\u0437 \u043e\u0441\u043e\u0431\u0435\u043d\u043d\u043e\u0441\u0442\u0435\u0439."}, {"text": "\u0413\u043b\u0438\u043a\u0435\u043c\u0438\u044f \u0432 \u0434\u0435\u0431\u044e\u0442\u0435 \u0437\u0430\u0431\u043e\u043b\u0435\u0432\u0430\u043d\u0438\u044f 7,2 \u043c\u043c\u043e\u043b\u044c/\u043b."}, {"text": "\u0412 \u043f\u0441\u0438\u0445\u043e\u044d\u043c\u043e\u0446\u0438\u043e\u043d\u0430\u043b\u044c\u043d\u043e\u043c \u0441\u0442\u0430\u0442\u0443\u0441\u0435 \u0441\u0442\u0430\u0431\u0438\u043b\u044c\u043d\u0430 ."}], "pipeline_tag": "text-classification", "inference": true, "base_model": "d0rj/ruRoberta-distilled", "model-index": [{"name": "SetFit with d0rj/ruRoberta-distilled", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.6906187624750499, "name": "Accuracy"}]}]}]} | DimasikKurd/ruRoberta-distilled-med-kd | null | [
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:d0rj/ruRoberta-distilled",
"model-index",
"region:us"
] | null | 2024-04-16T22:15:36+00:00 | [
"2209.11055"
] | [] | TAGS
#setfit #safetensors #roberta #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-d0rj/ruRoberta-distilled #model-index #region-us
| SetFit with d0rj/ruRoberta-distilled
====================================
This is a SetFit model that can be used for Text Classification. This SetFit model uses d0rj/ruRoberta-distilled as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a Sentence Transformer with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
Model Details
-------------
### Model Description
* Model Type: SetFit
* Sentence Transformer body: d0rj/ruRoberta-distilled
* Classification head: a LogisticRegression instance
* Maximum Sequence Length: 514 tokens
* Number of Classes: 2 classes
### Model Sources
* Repository: SetFit on GitHub
* Paper: Efficient Few-Shot Learning Without Prompts
* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts
### Model Labels
Evaluation
----------
### Metrics
Uses
----
### Direct Use for Inference
First install the SetFit library:
Then you can load this model and run inference.
Training Details
----------------
### Training Set Metrics
### Training Hyperparameters
* batch\_size: (16, 16)
* num\_epochs: (1, 1)
* max\_steps: 4903
* sampling\_strategy: oversampling
* body\_learning\_rate: (2e-05, 1e-05)
* head\_learning\_rate: 0.01
* loss: CosineSimilarityLoss
* distance\_metric: cosine\_distance
* margin: 0.25
* end\_to\_end: False
* use\_amp: False
* warmup\_proportion: 0.1
* seed: 42
* eval\_max\_steps: -1
* load\_best\_model\_at\_end: False
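As a rough sketch, these settings correspond to a SetFit `TrainingArguments` configuration along the following lines (options not listed above keep their defaults; this is not the original training script):

```python
# Approximate SetFit TrainingArguments matching the hyperparameter list above;
# distance_metric and margin keep their defaults, which already equal the listed values.
from setfit import TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    max_steps=4903,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```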
### Training Results
### Framework Versions
* Python: 3.10.12
* SetFit: 1.0.3
* Sentence Transformers: 2.6.1
* Transformers: 4.38.2
* PyTorch: 2.2.1+cu121
* Datasets: 2.18.0
* Tokenizers: 0.15.2
### BibTeX
| [
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: d0rj/ruRoberta-distilled\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 514 tokens\n* Number of Classes: 2 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (1, 1)\n* max\\_steps: 4903\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.6.1\n* Transformers: 4.38.2\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.18.0\n* Tokenizers: 0.15.2",
"### BibTeX"
] | [
"TAGS\n#setfit #safetensors #roberta #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-d0rj/ruRoberta-distilled #model-index #region-us \n",
"### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: d0rj/ruRoberta-distilled\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 514 tokens\n* Number of Classes: 2 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (1, 1)\n* max\\_steps: 4903\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.6.1\n* Transformers: 4.38.2\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.18.0\n* Tokenizers: 0.15.2",
"### BibTeX"
] |
null | transformers |
# LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF
This model was converted to GGUF format from [`LeroyDyer/Mixtral_AI_Cyber_Child`](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_Child) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_Child) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF --model mixtral_ai_cyber_child.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF --model mixtral_ai_cyber_child.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral_ai_cyber_child.Q4_K_M.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "llama-cpp", "gguf-my-repo"], "base_model": "Local"} | LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Local",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T22:16:04+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #text-generation-inference #unsloth #mistral #trl #llama-cpp #gguf-my-repo #en #base_model-Local #license-apache-2.0 #endpoints_compatible #region-us
|
# LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF
This model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_Cyber_Child' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_Cyber_Child' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #trl #llama-cpp #gguf-my-repo #en #base_model-Local #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeroyDyer/Mixtral_AI_Cyber_Child-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_Cyber_Child' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
<img src="https://huggingface.co/Orbina/Orbita-v0.1/resolve/main/orbita.png"
alt="Orbita LLM" width="500"/>
# Orbita-v0.1
This model is a Turkish Large Language Model with abilities across multiple dimensions of the Turkish language, covering a variety of tasks such as coding and math.
This model is an extended version of a Qwen-based Large Language Model (LLM) for Turkish. It was trained on a cleaned, carefully annotated Turkish dataset so that it follows Turkish instructions in an accurate and organized manner, and was fully fine-tuned on 8x H100 GPUs for 2 days.
## Model Details
- **Base Model**: Qwen 14B based LLM
- **Training Dataset**: Annotated Turkish Dataset
- **Training Method**: Full Finetuning
## Usage Examples
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Orbina/Orbita-v0.1",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Orbina/Orbita-v0.1")
prompt = "Python'da ekrana 'Merhaba Dünya' nasıl yazılır?"
messages = [
{"role": "system", "content": "Sen, Orbina AI tarafından üretilen ve verilen talimatları takip ederek en iyi cevabı üretmeye çalışan yardımcı bir yapay zekasın."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
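# generate up to 512 new tokens with light sampling (temperature 0.3, top_k 50, nucleus top_p 0.9)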
generated_ids = model.generate(
model_inputs.input_ids,
temperature=0.3,
top_k=50,
top_p=0.9,
max_new_tokens=512,
repetition_penalty=1,
)
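# keep only the newly generated tokens by slicing off the prompt portion of each sequence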
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Example Generations
```
Prompt:
Türkiye'deki finansal kurumlar, yapay zeka ile birlikte gelen veri güvenliği risklerini nasıl yönetiyor, 3 örnek verir misin?
Cevap:
Türkiye'deki finansal kurumlar, yapay zeka (AI) teknolojilerinin getirdiği veri güvenliği riskleri çeşitli yöntemlerle ele almakta ve bu riskleri azaltmaya çalışmaktadır. İşte üç örnek:
1. **Veri Şifreleme:** Finansal kuruluşlar, müşteri bilgileri gibi hassas verileri şifrelemeyi kullanarak veri ihlallerine karşı koruma sağlar. AES (Advanced Encryption Standard), RSA (Rivest-Shamir-Adleman) veya ECC (Elliptik Eğri Kriptografisi) gibi güçlü şifreleme algoritmaları, verilerin okunmasını engelleyebilir.
2. **Anomali Tespit Sistemleri:** Yapay zeka destekli anomali tespit sistemleri, beklenmedik davranışları algılayabilir ve güvenlik ekiplerini uyarır. Bu sistemler, makine öğrenimi modelleri sayesinde zaman içinde daha etkili hale gelebilir ve dolandırıcılık faaliyetlerini önlemede önemli rol oynar.
3. **Sürekli İzleme ve Güvenlik Politikası Uygulama:** Türkiye'deki finansal kurumlar, sürekli izleme ve güvenlik politikalarını uygularak veri güvenliğini sürdürür. Bu süreçlerde düzenli güvenlik denetimleri yapılır ve tüm personelin güvenlik konusunda eğitildiği göz önünde bulundurulur. Ayrıca, müşterilere de güvenlik protokollerini anlatan eğitimler sunulur.
Bu örnekler, Türkiye'deki finansal kurumların yapay zeka ile ilgili veri güvenliği risklerini yönetmek için kullandıkları bazı stratejileri göstermektedir. Ancak herhangi bir güvenlik sistemi mükemmel değildir ve bu nedenle kurumlar da sürekli olarak güvenlik durumlarını değerlendirip iyileştirmeler yapmaktadırlar.
```
```
Prompt:
Python'da ekrana "Merhaba Dünya" nasıl yazılır?
Cevap:
Python'da ekrana "Merhaba Dünya" yazmak için `print` fonksiyonunu kullanabilirsiniz. İşte basit bir örnek:
``python
print("Merhaba Dünya")
``
Bu kodu çalıştırdığınızda, konsol ekranında "Merhaba Dünya" metnini göreceksiniz. `print` fonksiyonu, verilen argümanı (bu durumda "Merhaba Dünya") alır ve onu çıktı olarak verir.
```
# [Open LLM Turkish Leaderboard Evaluation Results](https://huggingface.co/spaces/malhajar/OpenLLMTurkishLeaderboard)
| Metric |Value|
|---------------------------------|----:|
|Avg. |49.47|
|AI2 Reasoning Challenge_tr |41.97|
|HellaSwag_tr |48.00|
|MMLU_tr |49.51|
|TruthfulQA_tr |50.78|
|Winogrande _tr |56.16|
|GSM8k_tr |50.41|
| {"language": ["tr"], "license": "apache-2.0", "model-index": [{"name": "Orbita-v0.1", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge TR", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc", "value": 41.97, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag TR", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc", "value": 48, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU TR", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 49.51, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA TR", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc", "value": 50.78, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande TR", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 56.16, "name": "accuracy"}]}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k TR", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 50.41, "name": "accuracy"}]}]}]} | Orbina/Orbita-v0.1 | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"tr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T22:17:10+00:00 | [] | [
"tr"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #tr #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| <img src="URL
alt="Orbita LLM" width="500"/>
Orbita-v0.1
===========
This model is a Turkish Large Language Model with abilities across multiple dimensions of the Turkish language, covering a variety of tasks such as coding and math.
This model is an extended version of a Qwen-based Large Language Model (LLM) for Turkish. It was trained on a cleaned, carefully annotated Turkish dataset so that it follows Turkish instructions in an accurate and organized manner, and was fully fine-tuned on 8x H100 GPUs for 2 days.
Model Details
-------------
* Base Model: Qwen 14B based LLM
* Training Dataset: Annotated Turkish Dataset
* Training Method: Full Finetuning
Usage Examples
--------------
Example Generations
-------------------
Open LLM Turkish Leaderboard Evaluation Results
===============================================
| [] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #tr #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_32768_512_43M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6102
- F1 Score: 0.6853
- Accuracy: 0.6855
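
The card does not include a usage snippet; a hedged sketch of attaching this adapter to its base model with PEFT might look like the following (the sequence-classification head, the label count, and the need for `trust_remote_code=True` are assumptions, not documented facts):

```python
# Hypothetical loading sketch: the correct AutoModel class and label count for the
# seqsight base model are not stated in this card, so both are assumptions here.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_32768_512_43M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_32768_512_43M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the trained adapter weights
```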
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
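
Expressed as a Hugging Face `TrainingArguments` sketch (the output directory is a placeholder and any option not listed above keeps its default; this is an approximation, not the original training script):

```python
# Approximate reconstruction of the hyperparameters above; not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gue_prom_core_all_l32",  # placeholder name
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```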
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6574 | 8.33 | 200 | 0.6252 | 0.6479 | 0.6481 |
| 0.6007 | 16.67 | 400 | 0.6054 | 0.6741 | 0.6742 |
| 0.5655 | 25.0 | 600 | 0.5938 | 0.6869 | 0.6875 |
| 0.5366 | 33.33 | 800 | 0.6117 | 0.6893 | 0.6895 |
| 0.5147 | 41.67 | 1000 | 0.6081 | 0.6873 | 0.6875 |
| 0.4983 | 50.0 | 1200 | 0.6262 | 0.6867 | 0.6875 |
| 0.4838 | 58.33 | 1400 | 0.6314 | 0.6875 | 0.6882 |
| 0.4704 | 66.67 | 1600 | 0.6195 | 0.6879 | 0.6880 |
| 0.4624 | 75.0 | 1800 | 0.6536 | 0.6863 | 0.6870 |
| 0.4533 | 83.33 | 2000 | 0.6546 | 0.6882 | 0.6885 |
| 0.4464 | 91.67 | 2200 | 0.6909 | 0.6806 | 0.6828 |
| 0.4391 | 100.0 | 2400 | 0.6527 | 0.6890 | 0.6894 |
| 0.4337 | 108.33 | 2600 | 0.7048 | 0.6835 | 0.6845 |
| 0.4274 | 116.67 | 2800 | 0.6818 | 0.6817 | 0.6834 |
| 0.421 | 125.0 | 3000 | 0.6672 | 0.6813 | 0.6819 |
| 0.416 | 133.33 | 3200 | 0.7051 | 0.6775 | 0.6791 |
| 0.4101 | 141.67 | 3400 | 0.6947 | 0.6752 | 0.6772 |
| 0.4044 | 150.0 | 3600 | 0.6825 | 0.6853 | 0.6856 |
| 0.3969 | 158.33 | 3800 | 0.7112 | 0.6729 | 0.6755 |
| 0.3946 | 166.67 | 4000 | 0.7147 | 0.6775 | 0.6787 |
| 0.3878 | 175.0 | 4200 | 0.7278 | 0.6651 | 0.6699 |
| 0.3839 | 183.33 | 4400 | 0.7329 | 0.6832 | 0.6843 |
| 0.3771 | 191.67 | 4600 | 0.7318 | 0.6811 | 0.6821 |
| 0.3742 | 200.0 | 4800 | 0.7245 | 0.6725 | 0.6752 |
| 0.3685 | 208.33 | 5000 | 0.7346 | 0.6792 | 0.6801 |
| 0.3638 | 216.67 | 5200 | 0.7091 | 0.6745 | 0.6758 |
| 0.3598 | 225.0 | 5400 | 0.7389 | 0.6768 | 0.6780 |
| 0.3537 | 233.33 | 5600 | 0.7540 | 0.6762 | 0.6782 |
| 0.3507 | 241.67 | 5800 | 0.7543 | 0.6750 | 0.6772 |
| 0.3463 | 250.0 | 6000 | 0.7348 | 0.6777 | 0.6785 |
| 0.343 | 258.33 | 6200 | 0.7512 | 0.6720 | 0.6743 |
| 0.3395 | 266.67 | 6400 | 0.7809 | 0.6689 | 0.6716 |
| 0.337 | 275.0 | 6600 | 0.7572 | 0.6747 | 0.6758 |
| 0.3327 | 283.33 | 6800 | 0.7711 | 0.6703 | 0.6723 |
| 0.3304 | 291.67 | 7000 | 0.7803 | 0.6715 | 0.6735 |
| 0.3277 | 300.0 | 7200 | 0.7630 | 0.6765 | 0.6775 |
| 0.3222 | 308.33 | 7400 | 0.7903 | 0.6645 | 0.6679 |
| 0.3209 | 316.67 | 7600 | 0.7897 | 0.6749 | 0.6765 |
| 0.3164 | 325.0 | 7800 | 0.7750 | 0.6718 | 0.6736 |
| 0.3162 | 333.33 | 8000 | 0.7806 | 0.6717 | 0.6733 |
| 0.3132 | 341.67 | 8200 | 0.7812 | 0.6760 | 0.6774 |
| 0.3114 | 350.0 | 8400 | 0.8030 | 0.6733 | 0.675 |
| 0.3084 | 358.33 | 8600 | 0.7994 | 0.6754 | 0.6769 |
| 0.3085 | 366.67 | 8800 | 0.7910 | 0.6753 | 0.6765 |
| 0.3063 | 375.0 | 9000 | 0.8089 | 0.6713 | 0.6731 |
| 0.3067 | 383.33 | 9200 | 0.7927 | 0.6751 | 0.6764 |
| 0.3038 | 391.67 | 9400 | 0.7948 | 0.6711 | 0.6726 |
| 0.3046 | 400.0 | 9600 | 0.8084 | 0.6703 | 0.6725 |
| 0.302 | 408.33 | 9800 | 0.8047 | 0.6734 | 0.675 |
| 0.3013 | 416.67 | 10000 | 0.8083 | 0.6713 | 0.6730 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_32768_512_43M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_32768_512_43M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-16T22:20:04+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_32768\_512\_43M-L32\_all
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6102
* F1 Score: 0.6853
* Accuracy: 0.6855
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_32768_512_43M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5931
- F1 Score: 0.7093
- Accuracy: 0.7093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.652 | 9.52 | 200 | 0.6030 | 0.6722 | 0.6729 |
| 0.5896 | 19.05 | 400 | 0.5785 | 0.6993 | 0.6996 |
| 0.5462 | 28.57 | 600 | 0.5918 | 0.7115 | 0.7125 |
| 0.5101 | 38.1 | 800 | 0.5805 | 0.7181 | 0.7181 |
| 0.4841 | 47.62 | 1000 | 0.6089 | 0.7002 | 0.7025 |
| 0.4646 | 57.14 | 1200 | 0.6196 | 0.7018 | 0.7036 |
| 0.4503 | 66.67 | 1400 | 0.6412 | 0.7004 | 0.7032 |
| 0.4372 | 76.19 | 1600 | 0.6242 | 0.7114 | 0.7119 |
| 0.4264 | 85.71 | 1800 | 0.6359 | 0.7068 | 0.7085 |
| 0.4175 | 95.24 | 2000 | 0.6593 | 0.7009 | 0.7042 |
| 0.4083 | 104.76 | 2200 | 0.6448 | 0.7093 | 0.7096 |
| 0.399 | 114.29 | 2400 | 0.6594 | 0.7001 | 0.7027 |
| 0.3933 | 123.81 | 2600 | 0.6638 | 0.7073 | 0.7079 |
| 0.3842 | 133.33 | 2800 | 0.6732 | 0.7048 | 0.7059 |
| 0.3779 | 142.86 | 3000 | 0.6901 | 0.6983 | 0.7002 |
| 0.3706 | 152.38 | 3200 | 0.6892 | 0.7040 | 0.7051 |
| 0.3631 | 161.9 | 3400 | 0.7007 | 0.7019 | 0.7042 |
| 0.3555 | 171.43 | 3600 | 0.7479 | 0.6935 | 0.6974 |
| 0.3479 | 180.95 | 3800 | 0.7151 | 0.6876 | 0.6930 |
| 0.3409 | 190.48 | 4000 | 0.6988 | 0.6914 | 0.6932 |
| 0.3338 | 200.0 | 4200 | 0.6939 | 0.7056 | 0.7059 |
| 0.3278 | 209.52 | 4400 | 0.7377 | 0.6993 | 0.7011 |
| 0.3223 | 219.05 | 4600 | 0.7363 | 0.6991 | 0.7008 |
| 0.3175 | 228.57 | 4800 | 0.7309 | 0.7020 | 0.7034 |
| 0.31 | 238.1 | 5000 | 0.7685 | 0.6929 | 0.6963 |
| 0.3054 | 247.62 | 5200 | 0.7470 | 0.6976 | 0.6991 |
| 0.2985 | 257.14 | 5400 | 0.7656 | 0.6964 | 0.6981 |
| 0.2947 | 266.67 | 5600 | 0.7930 | 0.6927 | 0.6957 |
| 0.2897 | 276.19 | 5800 | 0.7930 | 0.6943 | 0.6970 |
| 0.2857 | 285.71 | 6000 | 0.7591 | 0.6962 | 0.6974 |
| 0.2807 | 295.24 | 6200 | 0.8115 | 0.6983 | 0.7002 |
| 0.2768 | 304.76 | 6400 | 0.8083 | 0.6903 | 0.6929 |
| 0.2738 | 314.29 | 6600 | 0.8111 | 0.6916 | 0.6942 |
| 0.2695 | 323.81 | 6800 | 0.8185 | 0.6953 | 0.6968 |
| 0.266 | 333.33 | 7000 | 0.8053 | 0.6876 | 0.6902 |
| 0.2623 | 342.86 | 7200 | 0.8061 | 0.6961 | 0.6976 |
| 0.2614 | 352.38 | 7400 | 0.8246 | 0.6944 | 0.6961 |
| 0.2561 | 361.9 | 7600 | 0.8124 | 0.6953 | 0.6968 |
| 0.2538 | 371.43 | 7800 | 0.8364 | 0.6977 | 0.6995 |
| 0.2521 | 380.95 | 8000 | 0.8313 | 0.6929 | 0.6947 |
| 0.2481 | 390.48 | 8200 | 0.8541 | 0.6912 | 0.6934 |
| 0.2474 | 400.0 | 8400 | 0.8415 | 0.6951 | 0.6968 |
| 0.2455 | 409.52 | 8600 | 0.8493 | 0.6937 | 0.6959 |
| 0.2436 | 419.05 | 8800 | 0.8542 | 0.6888 | 0.6914 |
| 0.2416 | 428.57 | 9000 | 0.8317 | 0.6955 | 0.6966 |
| 0.241 | 438.1 | 9200 | 0.8478 | 0.6912 | 0.6930 |
| 0.2394 | 447.62 | 9400 | 0.8550 | 0.6944 | 0.6963 |
| 0.2391 | 457.14 | 9600 | 0.8561 | 0.6924 | 0.6946 |
| 0.2374 | 466.67 | 9800 | 0.8512 | 0.6915 | 0.6934 |
| 0.2374 | 476.19 | 10000 | 0.8564 | 0.6929 | 0.6947 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_32768_512_43M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_32768_512_43M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-16T22:20:43+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_32768\_512\_43M-L32\_all
================================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5931
* F1 Score: 0.7093
* Accuracy: 0.7093
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-petco-filtered_annotated-ctr
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0023
- Mse: 0.0023
- Rmse: 0.0477
- Mae: 0.0361
- R2: 0.4149
- Accuracy: 0.75
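
No inference example is provided; a minimal scoring sketch, assuming the checkpoint carries a single-output regression head (as the Mse/Rmse/R2 metrics suggest), could look like:

```python
# Hedged scoring sketch: a single-output regression head (num_labels=1) and a
# CTR-like target are assumptions; the card itself gives no usage details.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "yimiwang/roberta-petco-filtered_annotated-ctr"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example ad headline", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # predicted CTR-style value
print(score)
```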
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Rmse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:-------:|:--------:|
| 0.0137 | 1.0 | 24 | 0.0049 | 0.0049 | 0.0702 | 0.0533 | -0.2657 | 0.55 |
| 0.0079 | 2.0 | 48 | 0.0042 | 0.0042 | 0.0647 | 0.0533 | -0.0751 | 0.5 |
| 0.0074 | 3.0 | 72 | 0.0026 | 0.0026 | 0.0505 | 0.0388 | 0.3441 | 0.6833 |
| 0.006 | 4.0 | 96 | 0.0041 | 0.0041 | 0.0638 | 0.0544 | -0.0467 | 0.5167 |
| 0.0061 | 5.0 | 120 | 0.0027 | 0.0027 | 0.0519 | 0.0409 | 0.3082 | 0.7 |
| 0.0054 | 6.0 | 144 | 0.0025 | 0.0025 | 0.0503 | 0.0399 | 0.3498 | 0.7 |
| 0.0052 | 7.0 | 168 | 0.0038 | 0.0038 | 0.0615 | 0.0469 | 0.0298 | 0.5833 |
| 0.0074 | 8.0 | 192 | 0.0027 | 0.0027 | 0.0522 | 0.0412 | 0.3000 | 0.65 |
| 0.0049 | 9.0 | 216 | 0.0028 | 0.0028 | 0.0530 | 0.0392 | 0.2781 | 0.7333 |
| 0.0052 | 10.0 | 240 | 0.0028 | 0.0028 | 0.0526 | 0.0401 | 0.2885 | 0.7 |
| 0.0035 | 11.0 | 264 | 0.0033 | 0.0033 | 0.0572 | 0.0438 | 0.1587 | 0.7 |
| 0.0039 | 12.0 | 288 | 0.0034 | 0.0034 | 0.0581 | 0.0455 | 0.1340 | 0.65 |
| 0.0031 | 13.0 | 312 | 0.0026 | 0.0026 | 0.0512 | 0.0375 | 0.3267 | 0.75 |
| 0.0043 | 14.0 | 336 | 0.0023 | 0.0023 | 0.0477 | 0.0361 | 0.4149 | 0.75 |
| 0.0044 | 15.0 | 360 | 0.0027 | 0.0027 | 0.0524 | 0.0397 | 0.2944 | 0.7333 |
| 0.0033 | 16.0 | 384 | 0.0024 | 0.0024 | 0.0485 | 0.0356 | 0.3948 | 0.7833 |
| 0.0031 | 17.0 | 408 | 0.0033 | 0.0033 | 0.0575 | 0.0437 | 0.1517 | 0.6667 |
| 0.0033 | 18.0 | 432 | 0.0026 | 0.0026 | 0.0508 | 0.0373 | 0.3370 | 0.7667 |
| 0.0031 | 19.0 | 456 | 0.0033 | 0.0033 | 0.0571 | 0.0447 | 0.1624 | 0.6667 |
| 0.0035 | 20.0 | 480 | 0.0029 | 0.0029 | 0.0538 | 0.0410 | 0.2562 | 0.6667 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "FacebookAI/roberta-base", "model-index": [{"name": "roberta-petco-filtered_annotated-ctr", "results": []}]} | yimiwang/roberta-petco-filtered_annotated-ctr | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T22:22:27+00:00 | [] | [] | TAGS
#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-FacebookAI/roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
| roberta-petco-filtered\_annotated-ctr
=====================================
This model is a fine-tuned version of FacebookAI/roberta-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0023
* Mse: 0.0023
* Rmse: 0.0477
* Mae: 0.0361
* R2: 0.4149
* Accuracy: 0.75
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 10
* eval\_batch\_size: 10
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-FacebookAI/roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_32768_512_43M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3368
- F1 Score: 0.6852
- Accuracy: 0.6852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5508 | 66.67 | 200 | 0.7452 | 0.6636 | 0.6639 |
| 0.2606 | 133.33 | 400 | 1.0597 | 0.6606 | 0.6607 |
| 0.1448 | 200.0 | 600 | 1.2962 | 0.6670 | 0.6672 |
| 0.0979 | 266.67 | 800 | 1.3765 | 0.6556 | 0.6558 |
| 0.0745 | 333.33 | 1000 | 1.5046 | 0.6641 | 0.6656 |
| 0.0591 | 400.0 | 1200 | 1.6142 | 0.6655 | 0.6656 |
| 0.0495 | 466.67 | 1400 | 1.5859 | 0.6669 | 0.6672 |
| 0.0431 | 533.33 | 1600 | 1.5597 | 0.6736 | 0.6737 |
| 0.0375 | 600.0 | 1800 | 1.6227 | 0.6720 | 0.6721 |
| 0.034 | 666.67 | 2000 | 1.8584 | 0.6656 | 0.6656 |
| 0.0314 | 733.33 | 2200 | 1.8048 | 0.6655 | 0.6656 |
| 0.0284 | 800.0 | 2400 | 1.7690 | 0.6749 | 0.6754 |
| 0.0267 | 866.67 | 2600 | 1.7515 | 0.6750 | 0.6754 |
| 0.0241 | 933.33 | 2800 | 2.0042 | 0.6737 | 0.6737 |
| 0.0235 | 1000.0 | 3000 | 1.9201 | 0.6802 | 0.6803 |
| 0.022 | 1066.67 | 3200 | 1.9093 | 0.6621 | 0.6623 |
| 0.0207 | 1133.33 | 3400 | 1.8718 | 0.6635 | 0.6639 |
| 0.02 | 1200.0 | 3600 | 2.0508 | 0.6748 | 0.6754 |
| 0.0193 | 1266.67 | 3800 | 1.9669 | 0.6704 | 0.6705 |
| 0.0183 | 1333.33 | 4000 | 1.9223 | 0.6759 | 0.6770 |
| 0.0181 | 1400.0 | 4200 | 2.0832 | 0.6778 | 0.6786 |
| 0.0174 | 1466.67 | 4400 | 2.0464 | 0.6770 | 0.6770 |
| 0.0171 | 1533.33 | 4600 | 1.9810 | 0.6828 | 0.6835 |
| 0.0161 | 1600.0 | 4800 | 1.9667 | 0.6671 | 0.6672 |
| 0.0157 | 1666.67 | 5000 | 2.0056 | 0.6912 | 0.6917 |
| 0.0152 | 1733.33 | 5200 | 1.9813 | 0.6835 | 0.6835 |
| 0.0143 | 1800.0 | 5400 | 2.1158 | 0.6720 | 0.6721 |
| 0.0146 | 1866.67 | 5600 | 2.0171 | 0.6818 | 0.6819 |
| 0.0134 | 1933.33 | 5800 | 1.9875 | 0.6849 | 0.6852 |
| 0.0138 | 2000.0 | 6000 | 1.9142 | 0.6785 | 0.6786 |
| 0.0133 | 2066.67 | 6200 | 1.9135 | 0.6785 | 0.6786 |
| 0.0127 | 2133.33 | 6400 | 2.0608 | 0.6751 | 0.6754 |
| 0.0123 | 2200.0 | 6600 | 2.0001 | 0.6819 | 0.6819 |
| 0.0122 | 2266.67 | 6800 | 2.0406 | 0.6882 | 0.6884 |
| 0.012 | 2333.33 | 7000 | 1.9761 | 0.6781 | 0.6786 |
| 0.0124 | 2400.0 | 7200 | 2.0452 | 0.6768 | 0.6770 |
| 0.0116 | 2466.67 | 7400 | 2.1118 | 0.6819 | 0.6819 |
| 0.0119 | 2533.33 | 7600 | 1.9930 | 0.6766 | 0.6770 |
| 0.0111 | 2600.0 | 7800 | 2.0033 | 0.6802 | 0.6803 |
| 0.0111 | 2666.67 | 8000 | 1.8848 | 0.6866 | 0.6868 |
| 0.0112 | 2733.33 | 8200 | 2.0047 | 0.6933 | 0.6933 |
| 0.0105 | 2800.0 | 8400 | 2.0523 | 0.6933 | 0.6933 |
| 0.0105 | 2866.67 | 8600 | 2.0585 | 0.6833 | 0.6835 |
| 0.0102 | 2933.33 | 8800 | 2.1231 | 0.6881 | 0.6884 |
| 0.0103 | 3000.0 | 9000 | 2.0167 | 0.6900 | 0.6900 |
| 0.0098 | 3066.67 | 9200 | 2.1389 | 0.6784 | 0.6786 |
| 0.01 | 3133.33 | 9400 | 2.1198 | 0.6817 | 0.6819 |
| 0.0094 | 3200.0 | 9600 | 2.1524 | 0.6883 | 0.6884 |
| 0.0097 | 3266.67 | 9800 | 2.0739 | 0.6800 | 0.6803 |
| 0.0097 | 3333.33 | 10000 | 2.0865 | 0.6816 | 0.6819 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_32768_512_43M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_32768_512_43M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-16T22:26:17+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_32768\_512\_43M-L32\_all
==============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3368
* F1 Score: 0.6852
* Accuracy: 0.6852
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_32768_512_43M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6305
- F1 Score: 0.7930
- Accuracy: 0.7931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6013 | 8.33 | 200 | 0.5363 | 0.7292 | 0.7314 |
| 0.5061 | 16.67 | 400 | 0.5138 | 0.7532 | 0.7549 |
| 0.4651 | 25.0 | 600 | 0.4949 | 0.7660 | 0.7662 |
| 0.4263 | 33.33 | 800 | 0.4895 | 0.7745 | 0.7747 |
| 0.3901 | 41.67 | 1000 | 0.4918 | 0.7858 | 0.7858 |
| 0.3644 | 50.0 | 1200 | 0.4899 | 0.7853 | 0.7853 |
| 0.3424 | 58.33 | 1400 | 0.4815 | 0.7878 | 0.7878 |
| 0.3223 | 66.67 | 1600 | 0.5384 | 0.7832 | 0.7834 |
| 0.3068 | 75.0 | 1800 | 0.5001 | 0.7935 | 0.7936 |
| 0.2919 | 83.33 | 2000 | 0.5076 | 0.7932 | 0.7932 |
| 0.28 | 91.67 | 2200 | 0.5376 | 0.7896 | 0.7897 |
| 0.2671 | 100.0 | 2400 | 0.5239 | 0.7861 | 0.7867 |
| 0.259 | 108.33 | 2600 | 0.5588 | 0.7911 | 0.7912 |
| 0.2514 | 116.67 | 2800 | 0.5731 | 0.7918 | 0.7921 |
| 0.2449 | 125.0 | 3000 | 0.5441 | 0.7862 | 0.7868 |
| 0.235 | 133.33 | 3200 | 0.5681 | 0.7912 | 0.7917 |
| 0.2296 | 141.67 | 3400 | 0.5793 | 0.7913 | 0.7917 |
| 0.224 | 150.0 | 3600 | 0.5647 | 0.7935 | 0.7936 |
| 0.2182 | 158.33 | 3800 | 0.5759 | 0.7952 | 0.7954 |
| 0.2131 | 166.67 | 4000 | 0.5769 | 0.7902 | 0.7909 |
| 0.2093 | 175.0 | 4200 | 0.5860 | 0.7944 | 0.7948 |
| 0.2041 | 183.33 | 4400 | 0.5956 | 0.7967 | 0.7968 |
| 0.1998 | 191.67 | 4600 | 0.6224 | 0.7907 | 0.7914 |
| 0.1967 | 200.0 | 4800 | 0.5952 | 0.7997 | 0.7998 |
| 0.1928 | 208.33 | 5000 | 0.6149 | 0.7971 | 0.7975 |
| 0.1918 | 216.67 | 5200 | 0.6137 | 0.7992 | 0.7993 |
| 0.1881 | 225.0 | 5400 | 0.5962 | 0.8016 | 0.8017 |
| 0.1851 | 233.33 | 5600 | 0.6599 | 0.7931 | 0.7934 |
| 0.183 | 241.67 | 5800 | 0.6309 | 0.7936 | 0.7941 |
| 0.1782 | 250.0 | 6000 | 0.6355 | 0.7966 | 0.7970 |
| 0.1785 | 258.33 | 6200 | 0.6361 | 0.7982 | 0.7983 |
| 0.1771 | 266.67 | 6400 | 0.6446 | 0.7936 | 0.7939 |
| 0.1743 | 275.0 | 6600 | 0.6296 | 0.7962 | 0.7965 |
| 0.1718 | 283.33 | 6800 | 0.6733 | 0.7919 | 0.7926 |
| 0.1708 | 291.67 | 7000 | 0.6524 | 0.7957 | 0.7959 |
| 0.168 | 300.0 | 7200 | 0.6677 | 0.7935 | 0.7939 |
| 0.1676 | 308.33 | 7400 | 0.6557 | 0.7946 | 0.7949 |
| 0.1657 | 316.67 | 7600 | 0.6663 | 0.7945 | 0.7948 |
| 0.1631 | 325.0 | 7800 | 0.6604 | 0.7956 | 0.7958 |
| 0.1619 | 333.33 | 8000 | 0.6593 | 0.7947 | 0.7949 |
| 0.161 | 341.67 | 8200 | 0.6667 | 0.7950 | 0.7953 |
| 0.1596 | 350.0 | 8400 | 0.6737 | 0.7930 | 0.7934 |
| 0.159 | 358.33 | 8600 | 0.6773 | 0.7924 | 0.7929 |
| 0.1585 | 366.67 | 8800 | 0.6729 | 0.7964 | 0.7968 |
| 0.1573 | 375.0 | 9000 | 0.6672 | 0.7985 | 0.7986 |
| 0.1567 | 383.33 | 9200 | 0.6649 | 0.7943 | 0.7946 |
| 0.1573 | 391.67 | 9400 | 0.6663 | 0.7947 | 0.7949 |
| 0.1564 | 400.0 | 9600 | 0.6704 | 0.7954 | 0.7956 |
| 0.1554 | 408.33 | 9800 | 0.6749 | 0.7952 | 0.7954 |
| 0.1553 | 416.67 | 10000 | 0.6744 | 0.7964 | 0.7966 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_32768_512_43M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_32768_512_43M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-16T22:26:34+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_32768\_512\_43M-L32\_all
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6305
* F1 Score: 0.7930
* Accuracy: 0.7931
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
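
As a generic sketch only (nothing model-specific is documented here), the checkpoint can presumably be loaded with the standard causal-LM classes; the dtype/device settings and the prompt below are placeholders:

```python
# Generic causal-LM loading sketch; the card documents no model-specific usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "piercemaloney/llemma-7b-v4-finetuned-small-chunks"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

prompt = "Theorem. For every natural number n,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```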
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | piercemaloney/llemma-7b-v4-finetuned-small-chunks | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T22:27:16+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
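A minimal sketch for loading this checkpoint with 🤗 Transformers, assuming it is a causal language model (the repo id below is taken from this card's metadata; the prompt and generation settings are illustrative, not part of the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from this card's metadata; "causal LM" is an assumption.
repo_id = "cackerman/rewrites_mixtral8x7b_it_4bit_ft_full_big"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Illustrative prompt; replace with the task this model was fine-tuned for.
inputs = tokenizer("Rewrite the following sentence:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```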
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cackerman/rewrites_mixtral8x7b_it_4bit_ft_full_big | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T22:27:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# CodeQwen1.5-7B - EXL2 8.0bpw
This is an 8.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
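Outside of Text Generation WebUI, these quants can also be loaded directly through the exllamav2 Python API. The following is a hedged sketch, assuming exllamav2 0.0.18 or newer is installed and the quant has been downloaded to a local directory (the path, prompt and sampler settings are illustrative):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at the local directory holding the EXL2 quant (illustrative path).
config = ExLlamaV2Config()
config.model_dir = "models/CodeQwen1.5-7B_exl2_8.0bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)          # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7           # illustrative sampler settings

print(generator.generate_simple("def fibonacci(n):", settings, 128))
```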
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 10.8767 |
| 7.0 | 10.8824 |
| 6.0 | 10.8876 |
| 5.0 | 10.9341 |
| 4.0 | 11.1726 |
| 3.5 | 11.4286 |
| 3.0 | 12.3887 |
| 2.75 | 12.8403 |
| 2.5 | 13.6661 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "pretrained"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B"} | Dracones/CodeQwen1.5-7B_exl2_8.0bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"pretrained",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-16T22:28:05+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| CodeQwen1.5-7B - EXL2 8.0bpw
============================
This is an 8.0bpw EXL2 quant of Qwen/CodeQwen1.5-7B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-20-layer](https://huggingface.co/Citaman/command-r-20-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-20-layer
layer_range: [0, 19]
- model: Citaman/command-r-20-layer
layer_range: [1, 20]
merge_method: slerp
base_model: Citaman/command-r-20-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
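To reproduce a merge like this, the configuration above is the input to mergekit's command-line tool — for example `mergekit-yaml config.yaml ./output-model-directory` after saving the YAML as `config.yaml` with mergekit installed. The exact invocation is an illustration and is not taken from this card.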
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-20-layer"]} | Citaman/command-r-19-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-20-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T22:29:17+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-20-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-20-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-20-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-20-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-20-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# CodeQwen1.5-7B - EXL2 7.0bpw
This is a 7.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 10.8767 |
| 7.0 | 10.8824 |
| 6.0 | 10.8876 |
| 5.0 | 10.9341 |
| 4.0 | 11.1726 |
| 3.5 | 11.4286 |
| 3.0 | 12.3887 |
| 2.75 | 12.8403 |
| 2.5 | 13.6661 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "pretrained"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B"} | Dracones/CodeQwen1.5-7B_exl2_7.0bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"pretrained",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"7-bit",
"region:us"
] | null | 2024-04-16T22:29:20+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #7-bit #region-us
| CodeQwen1.5-7B - EXL2 7.0bpw
============================
This is a 7.0bpw EXL2 quant of Qwen/CodeQwen1.5-7B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #7-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_hh_usp4_dpo9
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0767
- Rewards/chosen: -1.1762
- Rewards/rejected: -7.5013
- Rewards/accuracies: 0.6300
- Rewards/margins: 6.3252
- Logps/rejected: -117.1809
- Logps/chosen: -115.1588
- Logits/rejected: -0.1065
- Logits/chosen: -0.0807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
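These settings map onto a TRL `DPOTrainer` run roughly as sketched below. This is a hedged illustration rather than the script used for this model: the preference dataset, LoRA configuration and DPO `beta` are assumptions, and the API shown is the TRL 0.8-era `DPOTrainer` constructor.

```python
import torch
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "meta-llama/Llama-2-7b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Tiny illustrative preference set; DPO expects prompt / chosen / rejected columns.
train_dataset = Dataset.from_dict({
    "prompt": ["Human: How do I boil an egg?\n\nAssistant:"],
    "chosen": [" Simmer it in water for about nine minutes, then cool it."],
    "rejected": [" I have no idea."],
})

# Mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="model_hh_usp4_dpo9",
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)

peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)  # assumed LoRA settings

trainer = DPOTrainer(
    model,
    ref_model=None,              # with a PEFT config, TRL derives the reference model
    args=args,
    beta=0.1,                    # assumed; the card does not state beta
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```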
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0472 | 2.67 | 100 | 1.5866 | -2.4737 | -5.2244 | 0.6600 | 2.7507 | -114.6509 | -116.6005 | -0.1123 | -0.1103 |
| 0.061 | 5.33 | 200 | 2.8352 | -8.5414 | -13.8302 | 0.6600 | 5.2888 | -124.2130 | -123.3425 | -0.2214 | -0.1997 |
| 0.0022 | 8.0 | 300 | 3.6078 | -5.7355 | -11.8144 | 0.6600 | 6.0789 | -121.9732 | -120.2247 | -0.2463 | -0.2014 |
| 0.0001 | 10.67 | 400 | 4.1244 | -1.6102 | -7.8752 | 0.6300 | 6.2650 | -117.5963 | -115.6411 | -0.1230 | -0.0965 |
| 0.0 | 13.33 | 500 | 4.0644 | -1.1614 | -7.5191 | 0.6300 | 6.3577 | -117.2006 | -115.1424 | -0.1061 | -0.0806 |
| 0.0 | 16.0 | 600 | 4.0669 | -1.1412 | -7.4965 | 0.6300 | 6.3554 | -117.1756 | -115.1199 | -0.1068 | -0.0813 |
| 0.0 | 18.67 | 700 | 4.0482 | -1.1597 | -7.5269 | 0.6300 | 6.3672 | -117.2094 | -115.1405 | -0.1065 | -0.0810 |
| 0.0 | 21.33 | 800 | 4.0720 | -1.1432 | -7.5025 | 0.6300 | 6.3594 | -117.1822 | -115.1221 | -0.1067 | -0.0811 |
| 0.0 | 24.0 | 900 | 4.0691 | -1.1439 | -7.4980 | 0.6300 | 6.3541 | -117.1772 | -115.1229 | -0.1069 | -0.0810 |
| 0.0 | 26.67 | 1000 | 4.0767 | -1.1762 | -7.5013 | 0.6300 | 6.3252 | -117.1809 | -115.1588 | -0.1065 | -0.0807 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_usp4_dpo9", "results": []}]} | guoyu-zhang/model_hh_usp4_dpo9 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-16T22:29:59+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
| model\_hh\_usp4\_dpo9
=====================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 4.0767
* Rewards/chosen: -1.1762
* Rewards/rejected: -7.5013
* Rewards/accuracies: 0.6300
* Rewards/margins: 6.3252
* Logps/rejected: -117.1809
* Logps/chosen: -115.1588
* Logits/rejected: -0.1065
* Logits/chosen: -0.0807
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# CodeQwen1.5-7B - EXL2 6.0bpw
This is a 6.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 10.8767 |
| 7.0 | 10.8824 |
| 6.0 | 10.8876 |
| 5.0 | 10.9341 |
| 4.0 | 11.1726 |
| 3.5 | 11.4286 |
| 3.0 | 12.3887 |
| 2.75 | 12.8403 |
| 2.5 | 13.6661 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "pretrained"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B"} | Dracones/CodeQwen1.5-7B_exl2_6.0bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"pretrained",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"region:us"
] | null | 2024-04-16T22:30:24+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us
| CodeQwen1.5-7B - EXL2 6.0bpw
============================
This is a 6.0bpw EXL2 quant of Qwen/CodeQwen1.5-7B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
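With current transformers releases, an equivalent setup can be expressed through `BitsAndBytesConfig`. A minimal sketch mirroring the values above; the base model name is an assumption, since this card does not state one:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization settings listed above (8-bit load, no fp32 CPU offload).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# Base model is an assumption for illustration; the card does not name one.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```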
### Framework versions
- PEFT 0.4.0
| {"library_name": "peft"} | moneygod/jdft | null | [
"peft",
"safetensors",
"region:us"
] | null | 2024-04-16T22:30:47+00:00 | [] | [] | TAGS
#peft #safetensors #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: True\n- load_in_4bit: False\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: fp4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float32",
"### Framework versions\n\n\n- PEFT 0.4.0"
] | [
"TAGS\n#peft #safetensors #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: True\n- load_in_4bit: False\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: fp4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float32",
"### Framework versions\n\n\n- PEFT 0.4.0"
] |
text-generation | transformers |
# CodeQwen1.5-7B - EXL2 5.0bpw
This is a 5.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 10.8767 |
| 7.0 | 10.8824 |
| 6.0 | 10.8876 |
| 5.0 | 10.9341 |
| 4.0 | 11.1726 |
| 3.5 | 11.4286 |
| 3.0 | 12.3887 |
| 2.75 | 12.8403 |
| 2.5 | 13.6661 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "pretrained"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B"} | Dracones/CodeQwen1.5-7B_exl2_5.0bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"pretrained",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"5-bit",
"region:us"
] | null | 2024-04-16T22:31:26+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us
| CodeQwen1.5-7B - EXL2 5.0bpw
============================
This is a 5.0bpw EXL2 quant of Qwen/CodeQwen1.5-7B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# CodeQwen1.5-7B - EXL2 4.0bpw
This is a 4.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 10.8767 |
| 7.0 | 10.8824 |
| 6.0 | 10.8876 |
| 5.0 | 10.9341 |
| 4.0 | 11.1726 |
| 3.5 | 11.4286 |
| 3.0 | 12.3887 |
| 2.75 | 12.8403 |
| 2.5 | 13.6661 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "pretrained"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B"} | Dracones/CodeQwen1.5-7B_exl2_4.0bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"pretrained",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-16T22:32:23+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| CodeQwen1.5-7B - EXL2 4.0bpw
============================
This is a 4.0bpw EXL2 quant of Qwen/CodeQwen1.5-7B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# CodeQwen1.5-7B - EXL2 3.5bpw
This is a 3.5bpw EXL2 quant of [Qwen/CodeQwen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 10.8767 |
| 7.0 | 10.8824 |
| 6.0 | 10.8876 |
| 5.0 | 10.9341 |
| 4.0 | 11.1726 |
| 3.5 | 11.4286 |
| 3.0 | 12.3887 |
| 2.75 | 12.8403 |
| 2.5 | 13.6661 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "pretrained"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B"} | Dracones/CodeQwen1.5-7B_exl2_3.5bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"pretrained",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T22:33:05+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| CodeQwen1.5-7B - EXL2 3.5bpw
============================
This is a 3.5bpw EXL2 quant of Qwen/CodeQwen1.5-7B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# CodeQwen1.5-7B - EXL2 3.0bpw
This is a 3.0bpw EXL2 quant of [Qwen/CodeQwen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 10.8767 |
| 7.0 | 10.8824 |
| 6.0 | 10.8876 |
| 5.0 | 10.9341 |
| 4.0 | 11.1726 |
| 3.5 | 11.4286 |
| 3.0 | 12.3887 |
| 2.75 | 12.8403 |
| 2.5 | 13.6661 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "pretrained"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B"} | Dracones/CodeQwen1.5-7B_exl2_3.0bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"pretrained",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"3-bit",
"region:us"
] | null | 2024-04-16T22:33:44+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us
| CodeQwen1.5-7B - EXL2 3.0bpw
============================
This is a 3.0bpw EXL2 quant of Qwen/CodeQwen1.5-7B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# CodeQwen1.5-7B - EXL2 2.75bpw
This is a 2.75bpw EXL2 quant of [Qwen/CodeQwen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
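If you would rather load the quant directly with the exllamav2 Python API instead of Text Generation WebUI, a minimal sketch is shown below. It assumes the quant has been downloaded to a local folder; the path, prompt, and sampler settings are placeholders, and the calls follow the exllamav2 0.0.18-era examples.
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Placeholder: local folder containing the downloaded quant
model_dir = "models/CodeQwen1.5-7B_exl2_2.75bpw"

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across the available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

print(generator.generate_simple("def fibonacci(n):", settings, 128))
```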
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 10.8767 |
| 7.0 | 10.8824 |
| 6.0 | 10.8876 |
| 5.0 | 10.9341 |
| 4.0 | 11.1726 |
| 3.5 | 11.4286 |
| 3.0 | 12.3887 |
| 2.75 | 12.8403 |
| 2.5 | 13.6661 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "pretrained"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B"} | Dracones/CodeQwen1.5-7B_exl2_2.75bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"pretrained",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T22:34:22+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| CodeQwen1.5-7B - EXL2 2.75bpw
=============================
This is a 2.75bpw EXL2 quant of Qwen/CodeQwen1.5-7B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
text-generation | transformers |
# CodeQwen1.5-7B - EXL2 2.5bpw
This is a 2.5bpw EXL2 quant of [Qwen/CodeQwen1.5-7B](https://huggingface.co/Qwen/CodeQwen1.5-7B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 8.0 | 10.8767 |
| 7.0 | 10.8824 |
| 6.0 | 10.8876 |
| 5.0 | 10.9341 |
| 4.0 | 11.1726 |
| 3.5 | 11.4286 |
| 3.0 | 12.3887 |
| 2.75 | 12.8403 |
| 2.5 | 13.6661 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ -d "$MODEL_DIR" ]; then
output=$(python test_inference.py -m "$MODEL_DIR" -gs 17,24 -ed data/wikitext/wikitext-2-v1.parquet)
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
fi
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="CodeQwen1.5-7B"
# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.0 3.5 3.0 2.75 2.5)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
| {"language": ["en"], "license": "other", "tags": ["exl2", "pretrained"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE", "pipeline_tag": "text-generation", "base_model": "Qwen/CodeQwen1.5-7B"} | Dracones/CodeQwen1.5-7B_exl2_2.5bpw | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"exl2",
"pretrained",
"conversational",
"en",
"base_model:Qwen/CodeQwen1.5-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T22:34:55+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| CodeQwen1.5-7B - EXL2 2.5bpw
============================
This is a 2.5bpw EXL2 quant of Qwen/CodeQwen1.5-7B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
| [
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #exl2 #pretrained #conversational #en #base_model-Qwen/CodeQwen1.5-7B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
null | adapter-transformers |
# Adapter `BigTMiami/tapt_seq_bn_amazon_helpfulness_classification_adapter` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/tapt_seq_bn_amazon_helpfulness_classification_adapter", source="hf", set_active=True)
```
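Once the adapter and its classification head are active, inference works like any other sequence-classification model. The sketch below is illustrative only: the review text is made up, and the meaning of each class index is an assumption; check the head configuration for the actual label mapping.
```python
import torch
from transformers import AutoTokenizer
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")
model.load_adapter(
    "BigTMiami/tapt_seq_bn_amazon_helpfulness_classification_adapter",
    source="hf",
    set_active=True,
)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Made-up example review; the class-index meaning is an assumption
inputs = tokenizer("Works exactly as described, five stars.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```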
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> | {"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_helpfulness"]} | BigTMiami/tapt_seq_bn_amazon_helpfulness_classification_adapter | null | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_helpfulness",
"region:us"
] | null | 2024-04-16T22:35:35+00:00 | [] | [] | TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us
|
# Adapter 'BigTMiami/tapt_seq_bn_amazon_helpfulness_classification_adapter' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
| [
"# Adapter 'BigTMiami/tapt_seq_bn_amazon_helpfulness_classification_adapter' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] | [
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness #region-us \n",
"# Adapter 'BigTMiami/tapt_seq_bn_amazon_helpfulness_classification_adapter' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_ablation_4iters_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
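For reference, these values map roughly onto Hugging Face `TrainingArguments` as sketched below (the total batch size of 128 follows from 8 devices x 8 per-device batch x 2 gradient-accumulation steps). The output directory and the `bf16` flag are assumptions, not taken from this card.
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above for a multi-GPU run
training_args = TrainingArguments(
    output_dir="0.001_ablation_4iters_iter_1",  # assumption: any local path works
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,  # assumption: not stated in the card
)
```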
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_ablation_4iters_iter_1", "results": []}]} | ShenaoZ/0.001_ablation_4iters_iter_1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T22:36:51+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_ablation_4iters_iter_1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.001_ablation_4iters_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_ablation_4iters_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0_ablation_4iters_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.0_ablation_4iters_iter_1", "results": []}]} | ShenaoZ/0.0_ablation_4iters_iter_1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T22:36:53+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.0_ablation_4iters_iter_1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.0_ablation_4iters_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.0_ablation_4iters_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
reinforcement-learning | stable-baselines3 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MLIsaac -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MLIsaac -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
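Once the checkpoint is on disk, it can also be loaded directly with stable-baselines3. The sketch below assumes the default RL Zoo download path (which may differ on your machine) and rebuilds the Atari wrapping plus 4-frame stacking listed in the hyperparameters further down.
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Assumption: path where rl_zoo3.load_from_hub saved the checkpoint
model_path = "logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip"

# Rebuild the training-time environment: AtariWrapper + 4-frame stacking
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

model = DQN.load(model_path, env=env)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```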
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MLIsaac
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| {"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "622.00 +/- 137.64", "name": "mean_reward", "verified": false}]}]}]} | MLIsaac/SpaceInvadersNoFrameskip-v4 | null | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-16T22:40:24+00:00 | [] | [] | TAGS
#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# DQN Agent playing SpaceInvadersNoFrameskip-v4
This is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4
using the stable-baselines3 library
and the RL Zoo.
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: URL
SB3: URL
SB3 Contrib: URL
Install the RL Zoo (with SB3 and SB3-Contrib):
If you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:
## Training (with the RL Zoo)
## Hyperparameters
# Environment Arguments
| [
"# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.",
"## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:",
"## Training (with the RL Zoo)",
"## Hyperparameters",
"# Environment Arguments"
] | [
"TAGS\n#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.",
"## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:",
"## Training (with the RL Zoo)",
"## Hyperparameters",
"# Environment Arguments"
] |
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-classification-v11 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T22:40:26+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# WizardLM-2-4x7B-MoE-exl2-6_0bpw
This is a quantized version of [WizardLM-2-4x7B-MoE](https://huggingface.co/Skylaude/WizardLM-2-4x7B-MoE), an experimental MoE model made with [Mergekit](https://github.com/arcee-ai/mergekit). Quantization was done using version 0.0.18 of [ExLlamaV2](https://github.com/turboderp/exllamav2).
Please be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended.
For more information see the [original repository](https://huggingface.co/Skylaude/WizardLM-2-4x7B-MoE). | {"license": "apache-2.0", "tags": ["MoE", "merge", "mergekit", "Mistral", "Microsoft/WizardLM-2-7B"]} | Skylaude/WizardLM-2-4x7B-MoE-exl2-6_0bpw | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"MoE",
"merge",
"mergekit",
"Mistral",
"Microsoft/WizardLM-2-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"region:us"
] | null | 2024-04-16T22:42:15+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #MoE #merge #mergekit #Mistral #Microsoft/WizardLM-2-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us
|
# WizardLM-2-4x7B-MoE-exl2-6_0bpw
This is a quantized version of WizardLM-2-4x7B-MoE, an experimental MoE model made with Mergekit. Quantization was done using version 0.0.18 of ExLlamaV2.
Please be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended.
For more information see the original repository. | [
"# WizardLM-2-4x7B-MoE-exl2-6_0bpw\n\nThis is a quantized version of WizardLM-2-4x7B-MoE an experimental MoE model made with Mergekit. Quantization was done using version 0.0.18 of ExLlamaV2. \n\nPlease be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended.\n\nFor more information see the original repository."
] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #MoE #merge #mergekit #Mistral #Microsoft/WizardLM-2-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us \n",
"# WizardLM-2-4x7B-MoE-exl2-6_0bpw\n\nThis is a quantized version of WizardLM-2-4x7B-MoE an experimental MoE model made with Mergekit. Quantization was done using version 0.0.18 of ExLlamaV2. \n\nPlease be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended.\n\nFor more information see the original repository."
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-19-layer](https://huggingface.co/Citaman/command-r-19-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-19-layer
layer_range: [0, 18]
- model: Citaman/command-r-19-layer
layer_range: [1, 19]
merge_method: slerp
base_model: Citaman/command-r-19-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-19-layer"]} | Citaman/command-r-18-layer | null | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-19-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T22:42:22+00:00 | [] | [] | TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-19-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-19-layer
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-19-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-19-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-19-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** yiruiz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-13b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-2-13b-bnb-4bit"} | yiruiz/llama-2-13b-code-4bit-old | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-2-13b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-04-16T22:42:44+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-2-13b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us
|
# Uploaded model
- Developed by: yiruiz
- License: apache-2.0
- Finetuned from model : unsloth/llama-2-13b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: yiruiz\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-13b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-2-13b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Uploaded model\n\n- Developed by: yiruiz\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-2-13b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_32768_512_43M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2629
- F1 Score: 0.5879
- Accuracy: 0.5888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6696 | 15.38 | 200 | 0.6587 | 0.5869 | 0.6136 |
| 0.5968 | 30.77 | 400 | 0.7031 | 0.6016 | 0.6030 |
| 0.5336 | 46.15 | 600 | 0.7504 | 0.5952 | 0.5924 |
| 0.4868 | 61.54 | 800 | 0.7744 | 0.6052 | 0.6045 |
| 0.4517 | 76.92 | 1000 | 0.8131 | 0.6004 | 0.5985 |
| 0.4266 | 92.31 | 1200 | 0.8355 | 0.6056 | 0.6079 |
| 0.4066 | 107.69 | 1400 | 0.8539 | 0.6037 | 0.6051 |
| 0.3879 | 123.08 | 1600 | 0.8771 | 0.6029 | 0.6024 |
| 0.3696 | 138.46 | 1800 | 0.9007 | 0.5959 | 0.5952 |
| 0.3565 | 153.85 | 2000 | 0.9409 | 0.5984 | 0.5991 |
| 0.3428 | 169.23 | 2200 | 0.9110 | 0.6030 | 0.6036 |
| 0.3315 | 184.62 | 2400 | 0.9793 | 0.5919 | 0.5894 |
| 0.3243 | 200.0 | 2600 | 0.9690 | 0.6020 | 0.6027 |
| 0.316 | 215.38 | 2800 | 1.0130 | 0.6002 | 0.5976 |
| 0.3095 | 230.77 | 3000 | 1.0228 | 0.5960 | 0.5955 |
| 0.302 | 246.15 | 3200 | 1.0107 | 0.6020 | 0.6021 |
| 0.2961 | 261.54 | 3400 | 1.0218 | 0.6044 | 0.6106 |
| 0.2918 | 276.92 | 3600 | 1.0085 | 0.6031 | 0.6042 |
| 0.284 | 292.31 | 3800 | 1.0950 | 0.6027 | 0.6024 |
| 0.2785 | 307.69 | 4000 | 1.0137 | 0.5999 | 0.6012 |
| 0.2744 | 323.08 | 4200 | 1.0452 | 0.5963 | 0.5961 |
| 0.2687 | 338.46 | 4400 | 1.0583 | 0.6066 | 0.6064 |
| 0.2629 | 353.85 | 4600 | 1.0722 | 0.6068 | 0.6079 |
| 0.258 | 369.23 | 4800 | 1.1206 | 0.6028 | 0.6048 |
| 0.2544 | 384.62 | 5000 | 1.1001 | 0.6066 | 0.6070 |
| 0.2511 | 400.0 | 5200 | 1.0975 | 0.5970 | 0.5949 |
| 0.246 | 415.38 | 5400 | 1.0744 | 0.5930 | 0.5912 |
| 0.2433 | 430.77 | 5600 | 1.1112 | 0.6032 | 0.6036 |
| 0.2405 | 446.15 | 5800 | 1.0809 | 0.5950 | 0.5930 |
| 0.2353 | 461.54 | 6000 | 1.1422 | 0.6016 | 0.6009 |
| 0.2311 | 476.92 | 6200 | 1.1493 | 0.5986 | 0.6003 |
| 0.2287 | 492.31 | 6400 | 1.1663 | 0.6003 | 0.5994 |
| 0.2259 | 507.69 | 6600 | 1.1394 | 0.6013 | 0.6024 |
| 0.2241 | 523.08 | 6800 | 1.1298 | 0.6040 | 0.6064 |
| 0.2185 | 538.46 | 7000 | 1.1475 | 0.6006 | 0.6003 |
| 0.2177 | 553.85 | 7200 | 1.1685 | 0.6074 | 0.6079 |
| 0.2165 | 569.23 | 7400 | 1.1946 | 0.6043 | 0.6048 |
| 0.2132 | 584.62 | 7600 | 1.1583 | 0.6005 | 0.5988 |
| 0.2096 | 600.0 | 7800 | 1.1970 | 0.6033 | 0.6024 |
| 0.2088 | 615.38 | 8000 | 1.2061 | 0.6017 | 0.6 |
| 0.2062 | 630.77 | 8200 | 1.1477 | 0.6007 | 0.5997 |
| 0.2042 | 646.15 | 8400 | 1.1865 | 0.6007 | 0.5997 |
| 0.2037 | 661.54 | 8600 | 1.1718 | 0.6047 | 0.6057 |
| 0.2021 | 676.92 | 8800 | 1.1903 | 0.6055 | 0.6061 |
| 0.2 | 692.31 | 9000 | 1.1901 | 0.6066 | 0.6073 |
| 0.2002 | 707.69 | 9200 | 1.1981 | 0.6047 | 0.6045 |
| 0.1987 | 723.08 | 9400 | 1.1860 | 0.6044 | 0.6045 |
| 0.1983 | 738.46 | 9600 | 1.1849 | 0.6021 | 0.6021 |
| 0.1955 | 753.85 | 9800 | 1.1994 | 0.6056 | 0.6057 |
| 0.1949 | 769.23 | 10000 | 1.1998 | 0.6057 | 0.6057 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_32768_512_43M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_32768_512_43M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-16T22:43:10+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_32768\_512\_43M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_32768\_512\_43M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2629
* F1 Score: 0.5879
* Accuracy: 0.5888
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_32768_512_43M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
image-classification | pytorch |
# TransNeXt
Official Model release
for ["TransNeXt: Robust Foveal Visual Perception for Vision Transformers"](https://arxiv.org/pdf/2311.17132.pdf) [CVPR 2024]
.
## Model Details
- **Code:** https://github.com/DaiShiResearch/TransNeXt
- **Paper:** [TransNeXt: Robust Foveal Visual Perception for Vision Transformers](https://arxiv.org/abs/2311.17132)
- **Author:** [Dai Shi](https://github.com/DaiShiResearch)
- **Email:** [email protected]
## Methods
#### Pixel-focused attention (Left) & aggregated attention (Right):

#### Convolutional GLU (First on the right):

## Results
#### Image Classification, Detection and Segmentation:

#### Attention Visualization:

## Model Zoo
### Image Classification
***Classification code & weights & configs & training logs are >>>[here](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/ )<<<.***
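For a quick start with the released classification weights (listed in the tables below), a loading sketch is given here. The exact module path, constructor name, and checkpoint layout are assumptions to verify against the repository code.
```python
import torch

# Assumption: the classification code from the TransNeXt GitHub repo is on PYTHONPATH
# and defines a transnext_tiny() constructor (names may differ in the actual repo).
from transnext import transnext_tiny

model = transnext_tiny(num_classes=1000)
checkpoint = torch.load("transnext_tiny_224_1k.pth", map_location="cpu")
# Some releases nest weights under a "model" key; fall back to the raw dict otherwise.
state_dict = checkpoint.get("model", checkpoint)
model.load_state_dict(state_dict)
model.eval()
```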
**ImageNet-1K 224x224 pre-trained models:**
| Model | #Params | #FLOPs |IN-1K | IN-A | IN-C↓ |IN-R|Sketch|IN-V2|Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|
| TransNeXt-Micro|12.8M|2.7G| 82.5 | 29.9 | 50.8|45.8|33.0|72.6|[model](https://huggingface.co/DaiShiResearch/transnext-micro-224-1k/resolve/main/transnext_micro_224_1k.pth?download=true) |[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_micro.py)|[log](https://huggingface.co/DaiShiResearch/transnext-micro-224-1k/raw/main/transnext_micro_224_1k.txt) |
| TransNeXt-Tiny |28.2M|5.7G| 84.0| 39.9| 46.5|49.6|37.6|73.8|[model](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_tiny.py)|[log](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/raw/main/transnext_tiny_224_1k.txt)|
| TransNeXt-Small |49.7M|10.3G| 84.7| 47.1| 43.9|52.5| 39.7|74.8 |[model](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_small.py)|[log](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/raw/main/transnext_small_224_1k.txt)|
| TransNeXt-Base |89.7M|18.4G| 84.8| 50.6|43.5|53.9|41.4|75.1| [model](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_base.py)|[log](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/raw/main/transnext_base_224_1k.txt)|
**ImageNet-1K 384x384 fine-tuned models:**
| Model | #Params | #FLOPs |IN-1K | IN-A |IN-R|Sketch|IN-V2| Download |Config|
|:---:|:---:|:---:|:---:| :---:|:---:|:---:| :---:|:---:|:---:|
| TransNeXt-Small |49.7M|32.1G| 86.0| 58.3|56.4|43.2|76.8| [model](https://huggingface.co/DaiShiResearch/transnext-small-384-1k-ft-1k/resolve/main/transnext_small_384_1k_ft_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/finetune/transnext_small_384_ft.py)|
| TransNeXt-Base |89.7M|56.3G| 86.2| 61.6|57.7|44.7|77.0| [model](https://huggingface.co/DaiShiResearch/transnext-base-384-1k-ft-1k/resolve/main/transnext_base_384_1k_ft_1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/finetune/transnext_base_384_ft.py)|
**ImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:**
*(See Table.9 in Appendix D.6 for details)*
| Model |Token mixer| #Params | #FLOPs |IN-1K |Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|
|TransNeXt-Micro|**A-A-A-A**|13.1M|3.3G| 82.6 |[model](https://huggingface.co/DaiShiResearch/transnext-micro-AAAA-256-1k/resolve/main/transnext_micro_AAAA_256_1k.pth?download=true) |[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/classification/configs/transnext_micro_AAAA_256.py)|[log](https://huggingface.co/DaiShiResearch/transnext-micro-AAAA-256-1k/blob/main/transnext_micro_AAAA_256_1k.txt) |
### Object Detection
***Object detection code & weights & configs & training logs are >>>[here](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/ )<<<.***
**COCO object detection and instance segmentation results using the Mask R-CNN method:**
| Backbone | Pretrained Model| Lr Schd| box mAP | mask mAP | #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true) |1x|49.9|44.6|47.9M|[model](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-tiny-coco/resolve/main/mask_rcnn_transnext_tiny_fpn_1x_coco_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/maskrcnn/configs/mask_rcnn_transnext_tiny_fpn_1x_coco.py)|[log](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-tiny-coco/raw/main/mask_rcnn_transnext_tiny_fpn_1x_coco_in1k.log.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true) |1x|51.1|45.5|69.3M|[model](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-small-coco/resolve/main/mask_rcnn_transnext_small_fpn_1x_coco_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/maskrcnn/configs/mask_rcnn_transnext_small_fpn_1x_coco.py)|[log](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-small-coco/raw/main/mask_rcnn_transnext_small_fpn_1x_coco_in1k.log.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true) |1x|51.7|45.9|109.2M|[model](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-base-coco/resolve/main/mask_rcnn_transnext_base_fpn_1x_coco_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/maskrcnn/configs/mask_rcnn_transnext_base_fpn_1x_coco.py)|[log](https://huggingface.co/DaiShiResearch/maskrcnn-transnext-base-coco/raw/main/mask_rcnn_transnext_base_fpn_1x_coco_in1k.log.json)|
**COCO object detection results using the DINO method:**
| Backbone | Pretrained Model| scales | epochs | box mAP | #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:| :---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|4scale | 12|55.1|47.8M|[model](https://huggingface.co/DaiShiResearch/dino-4scale-transnext-tiny-coco/resolve/main/dino_4scale_transnext_tiny_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-4scale_transnext_tiny-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-4scale-transnext-tiny-coco/raw/main/dino_4scale_transnext_tiny_12e_in1k.json)|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|5scale | 12|55.7|48.1M|[model](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-tiny-coco/resolve/main/dino_5scale_transnext_tiny_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-5scale_transnext_tiny-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-tiny-coco/raw/main/dino_5scale_transnext_tiny_12e_in1k.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|5scale | 12|56.6|69.6M|[model](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-small-coco/resolve/main/dino_5scale_transnext_small_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-5scale_transnext_small-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-small-coco/raw/main/dino_5scale_transnext_small_12e_in1k.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|5scale | 12|57.1|110M|[model](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-base-coco/resolve/main/dino_5scale_transnext_base_12e_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/detection/dino/configs/dino-5scale_transnext_base-12e_coco.py)|[log](https://huggingface.co/DaiShiResearch/dino-5scale-transnext-base-coco/raw/main/dino_5scale_transnext_base_12e_in1k.json)|
### Semantic Segmentation
***Semantic segmentation code & weights & configs & training logs are >>>[here](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/ )<<<.***
**ADE20K semantic segmentation results using the UPerNet method:**
| Backbone | Pretrained Model| Crop Size |Lr Schd| mIoU|mIoU (ms+flip)| #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|512x512|160K|51.1|51.5/51.7|59M|[model](https://huggingface.co/DaiShiResearch/upernet-transnext-tiny-ade/resolve/main/upernet_transnext_tiny_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/upernet/configs/upernet_transnext_tiny_512x512_160k_ade20k_ss.py)|[log](https://huggingface.co/DaiShiResearch/upernet-transnext-tiny-ade/blob/main/upernet_transnext_tiny_512x512_160k_ade20k_ss.log.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|512x512|160K|52.2|52.5/51.8|80M|[model](https://huggingface.co/DaiShiResearch/upernet-transnext-small-ade/resolve/main/upernet_transnext_small_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/upernet/configs/upernet_transnext_small_512x512_160k_ade20k_ss.py)|[log](https://huggingface.co/DaiShiResearch/upernet-transnext-small-ade/blob/main/upernet_transnext_small_512x512_160k_ade20k_ss.log.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|512x512|160K|53.0|53.5/53.7|121M|[model](https://huggingface.co/DaiShiResearch/upernet-transnext-base-ade/resolve/main/upernet_transnext_base_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/upernet/configs/upernet_transnext_base_512x512_160k_ade20k_ss.py)|[log](https://huggingface.co/DaiShiResearch/upernet-transnext-base-ade/blob/main/upernet_transnext_base_512x512_160k_ade20k_ss.log.json)|
* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: **interpolation** and **extrapolation** of relative position bias.
**ADE20K semantic segmentation results using the Mask2Former method:**
| Backbone | Pretrained Model| Crop Size |Lr Schd| mIoU| #Params | Download |Config| Log |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| TransNeXt-Tiny | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-tiny-224-1k/resolve/main/transnext_tiny_224_1k.pth?download=true)|512x512|160K|53.4|47.5M|[model](https://huggingface.co/DaiShiResearch/mask2former-transnext-tiny-ade/resolve/main/mask2former_transnext_tiny_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/mask2former/configs/mask2former_transnext_tiny_160k_ade20k-512x512.py)|[log](https://huggingface.co/DaiShiResearch/mask2former-transnext-tiny-ade/raw/main/mask2former_transnext_tiny_512x512_160k_ade20k_in1k.json)|
| TransNeXt-Small | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-small-224-1k/resolve/main/transnext_small_224_1k.pth?download=true)|512x512|160K|54.1|69.0M|[model](https://huggingface.co/DaiShiResearch/mask2former-transnext-small-ade/resolve/main/mask2former_transnext_small_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/mask2former/configs/mask2former_transnext_small_160k_ade20k-512x512.py)|[log](https://huggingface.co/DaiShiResearch/mask2former-transnext-small-ade/raw/main/mask2former_transnext_small_512x512_160k_ade20k_in1k.json)|
| TransNeXt-Base | [ImageNet-1K](https://huggingface.co/DaiShiResearch/transnext-base-224-1k/resolve/main/transnext_base_224_1k.pth?download=true)|512x512|160K|54.7|109M|[model](https://huggingface.co/DaiShiResearch/mask2former-transnext-base-ade/resolve/main/mask2former_transnext_base_512x512_160k_ade20k_in1k.pth?download=true)|[config](https://github.com/DaiShiResearch/TransNeXt/tree/main/segmentation/mask2former/configs/mask2former_transnext_base_160k_ade20k-512x512.py)|[log](https://huggingface.co/DaiShiResearch/mask2former-transnext-base-ade/raw/main/mask2former_transnext_base_512x512_160k_ade20k_in1k.json)|
## Citation
If you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this
project.
```
@misc{shi2023transnext,
        author = {Dai Shi},
        title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},
        year = {2023},
        eprint = {arXiv:2311.17132},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
}
```
 | {"language": ["en"], "license": "apache-2.0", "library_name": "pytorch", "tags": ["vision"], "datasets": ["imagenet-1k"], "metrics": ["accuracy"], "pipeline_tag": "image-classification"} | DaiShiResearch/transnext-tiny-224-1k | null | [
"pytorch",
"vision",
"image-classification",
"en",
"dataset:imagenet-1k",
"arxiv:2311.17132",
"license:apache-2.0",
"region:us"
] | null | 2024-04-16T22:46:16+00:00 | [
"2311.17132"
] | [
"en"
] | TAGS
#pytorch #vision #image-classification #en #dataset-imagenet-1k #arxiv-2311.17132 #license-apache-2.0 #region-us
| TransNeXt
=========
Official Model release
for "TransNeXt: Robust Foveal Visual Perception for Vision Transformers" [CVPR 2024]
.
Model Details
-------------
* Code: URL
* Paper: TransNeXt: Robust Foveal Visual Perception for Vision Transformers
* Author: Dai Shi
* Email: daishiresearch@URL
Methods
-------
#### Pixel-focused attention (Left) & aggregated attention (Right):
!pixel-focused\_attention
#### Convolutional GLU (First on the right):
!Convolutional GLU
Results
-------
#### Image Classification, Detection and Segmentation:
!experiment\_figure
#### Attention Visualization:
!foveal\_peripheral\_vision
Model Zoo
---------
### Image Classification
*Classification code & weights & configs & training logs are >>>here<<<.*
ImageNet-1K 224x224 pre-trained models:
ImageNet-1K 384x384 fine-tuned models:
ImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:
*(See Table.9 in Appendix D.6 for details)*
### Object Detection
*Object detection code & weights & configs & training logs are >>>here<<<.*
COCO object detection and instance segmentation results using the Mask R-CNN method:
COCO object detection results using the DINO method:
### Semantic Segmentation
*Semantic segmentation code & weights & configs & training logs are >>>here<<<.*
ADE20K semantic segmentation results using the UPerNet method:
* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: interpolation and extrapolation of relative position bias.
ADE20K semantic segmentation results using the Mask2Former method:
If you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this
project.
```
@misc{shi2023transnext,
author = {Dai Shi},
title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},
year = {2023},
eprint = {arXiv:2311.17132},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| [
"#### Pixel-focused attention (Left) & aggregated attention (Right):\n\n\n!pixel-focused\\_attention",
"#### Convolutional GLU (First on the right):\n\n\n!Convolutional GLU\n\n\nResults\n-------",
"#### Image Classification, Detection and Segmentation:\n\n\n!experiment\\_figure",
"#### Attention Visualization:\n\n\n!foveal\\_peripheral\\_vision\n\n\nModel Zoo\n---------",
"### Image Classification\n\n\n*Classification code & weights & configs & training logs are >>>here<<<.*\n\n\nImageNet-1K 224x224 pre-trained models:\n\n\n\nImageNet-1K 384x384 fine-tuned models:\n\n\n\nImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:\n\n\n*(See Table.9 in Appendix D.6 for details)*",
"### Object Detection\n\n\n*Object detection code & weights & configs & training logs are >>>here<<<.*\n\n\nCOCO object detection and instance segmentation results using the Mask R-CNN method:\n\n\n\nCOCO object detection results using the DINO method:",
"### Semantic Segmentation\n\n\n*Semantic segmentation code & weights & configs & training logs are >>>here<<<.*\n\n\nADE20K semantic segmentation results using the UPerNet method:\n\n\n\n* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: interpolation and extrapolation of relative position bias.\n\n\nADE20K semantic segmentation results using the Mask2Former method:\n\n\n\nIf you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this\nproject.\n\n\n\n```\n@misc{shi2023transnext,\n author = {Dai Shi},\n title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},\n year = {2023},\n eprint = {arXiv:2311.17132},\n archivePrefix={arXiv},\n primaryClass={cs.CV}\n}\n\n```"
] | [
"TAGS\n#pytorch #vision #image-classification #en #dataset-imagenet-1k #arxiv-2311.17132 #license-apache-2.0 #region-us \n",
"#### Pixel-focused attention (Left) & aggregated attention (Right):\n\n\n!pixel-focused\\_attention",
"#### Convolutional GLU (First on the right):\n\n\n!Convolutional GLU\n\n\nResults\n-------",
"#### Image Classification, Detection and Segmentation:\n\n\n!experiment\\_figure",
"#### Attention Visualization:\n\n\n!foveal\\_peripheral\\_vision\n\n\nModel Zoo\n---------",
"### Image Classification\n\n\n*Classification code & weights & configs & training logs are >>>here<<<.*\n\n\nImageNet-1K 224x224 pre-trained models:\n\n\n\nImageNet-1K 384x384 fine-tuned models:\n\n\n\nImageNet-1K 256x256 pre-trained model fully utilizing aggregated attention at all stages:\n\n\n*(See Table.9 in Appendix D.6 for details)*",
"### Object Detection\n\n\n*Object detection code & weights & configs & training logs are >>>here<<<.*\n\n\nCOCO object detection and instance segmentation results using the Mask R-CNN method:\n\n\n\nCOCO object detection results using the DINO method:",
"### Semantic Segmentation\n\n\n*Semantic segmentation code & weights & configs & training logs are >>>here<<<.*\n\n\nADE20K semantic segmentation results using the UPerNet method:\n\n\n\n* In the context of multi-scale evaluation, TransNeXt reports test results under two distinct scenarios: interpolation and extrapolation of relative position bias.\n\n\nADE20K semantic segmentation results using the Mask2Former method:\n\n\n\nIf you find our work helpful, please consider citing the following bibtex. We would greatly appreciate a star for this\nproject.\n\n\n\n```\n@misc{shi2023transnext,\n author = {Dai Shi},\n title = {TransNeXt: Robust Foveal Visual Perception for Vision Transformers},\n year = {2023},\n eprint = {arXiv:2311.17132},\n archivePrefix={arXiv},\n primaryClass={cs.CV}\n}\n\n```"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | nninjun/gpt2-xl-lora-stereoset-A-B-v1 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T22:47:50+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/crimsonjoo/Neversleep-11B-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
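As a minimal usage sketch (assuming `llama-cpp-python` is installed and a single-file quant from the table below, e.g. the Q4_K_M file, has been downloaded; the local path is an assumption):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a downloaded single-file quant; point model_path at wherever you saved it.
llm = Llama(model_path="Neversleep-11B-v0.1.Q4_K_M.gguf", n_ctx=4096)

out = llm("Explain in one sentence what a GGUF quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```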
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.IQ3_XS.gguf) | IQ3_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.IQ3_M.gguf) | IQ3_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.IQ4_XS.gguf) | IQ4_XS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q4_K_S.gguf) | Q4_K_S | 6.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q5_K_S.gguf) | Q5_K_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q5_K_M.gguf) | Q5_K_M | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q6_K.gguf) | Q6_K | 9.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Neversleep-11B-v0.1-GGUF/resolve/main/Neversleep-11B-v0.1.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["generated_from_trainer"], "base_model": "crimsonjoo/Neversleep-11B-v0.1", "quantized_by": "mradermacher"} | mradermacher/Neversleep-11B-v0.1-GGUF | null | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:crimsonjoo/Neversleep-11B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T22:47:53+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #generated_from_trainer #en #base_model-crimsonjoo/Neversleep-11B-v0.1 #license-apache-2.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #generated_from_trainer #en #base_model-crimsonjoo/Neversleep-11B-v0.1 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal-bert-lora
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6841
- Accuracy: 0.8048
- Precision: 0.7955
- Recall: 0.8048
- Precision Macro: 0.6332
- Recall Macro: 0.6316
- Macro Fpr: 0.0177
- Weighted Fpr: 0.0170
- Weighted Specificity: 0.9753
- Macro Specificity: 0.9853
- Weighted Sensitivity: 0.8048
- Macro Sensitivity: 0.6316
- F1 Micro: 0.8048
- F1 Macro: 0.6233
- F1 Weighted: 0.7978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
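The hyperparameters above translate roughly into `transformers.TrainingArguments` as in the sketch below (`output_dir` and the model/data wiring are assumptions, not taken from this card):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="legal-bert-lora",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective batch size 32
    lr_scheduler_type="linear",
    num_train_epochs=15,
)
```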
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:---------------:|:------------:|:---------:|:------------:|:--------------------:|:-----------------:|:--------------------:|:-----------------:|:--------:|:--------:|:-----------:|
| No log | 1.0 | 160 | 1.2986 | 0.6421 | 0.5563 | 0.6421 | 0.2826 | 0.3627 | 0.0384 | 0.0383 | 0.9531 | 0.9730 | 0.6421 | 0.3627 | 0.6421 | 0.3114 | 0.5878 |
| No log | 2.0 | 321 | 0.8962 | 0.7273 | 0.6748 | 0.7273 | 0.3629 | 0.4471 | 0.0265 | 0.0261 | 0.9685 | 0.9797 | 0.7273 | 0.4471 | 0.7273 | 0.3889 | 0.6926 |
| No log | 3.0 | 482 | 0.7814 | 0.7413 | 0.7104 | 0.7413 | 0.3985 | 0.4561 | 0.0245 | 0.0243 | 0.9703 | 0.9808 | 0.7413 | 0.4561 | 0.7413 | 0.4041 | 0.7109 |
| 1.2548 | 4.0 | 643 | 0.7648 | 0.7382 | 0.7158 | 0.7382 | 0.4273 | 0.4496 | 0.0254 | 0.0247 | 0.9662 | 0.9803 | 0.7382 | 0.4496 | 0.7382 | 0.4122 | 0.7112 |
| 1.2548 | 5.0 | 803 | 0.7329 | 0.7452 | 0.7105 | 0.7452 | 0.4162 | 0.4569 | 0.0248 | 0.0238 | 0.9668 | 0.9808 | 0.7452 | 0.4569 | 0.7452 | 0.4120 | 0.7133 |
| 1.2548 | 6.0 | 964 | 0.7430 | 0.7568 | 0.7547 | 0.7568 | 0.4627 | 0.4868 | 0.0229 | 0.0224 | 0.9710 | 0.9819 | 0.7568 | 0.4868 | 0.7568 | 0.4504 | 0.7424 |
| 0.6432 | 7.0 | 1125 | 0.7300 | 0.7723 | 0.7524 | 0.7723 | 0.5180 | 0.5411 | 0.0213 | 0.0206 | 0.9724 | 0.9830 | 0.7723 | 0.5411 | 0.7723 | 0.5175 | 0.7578 |
| 0.6432 | 8.0 | 1286 | 0.7212 | 0.7699 | 0.7514 | 0.7699 | 0.5096 | 0.5397 | 0.0216 | 0.0209 | 0.9727 | 0.9828 | 0.7699 | 0.5397 | 0.7699 | 0.5123 | 0.7556 |
| 0.6432 | 9.0 | 1446 | 0.6910 | 0.7839 | 0.7634 | 0.7839 | 0.5217 | 0.5566 | 0.0200 | 0.0193 | 0.9728 | 0.9838 | 0.7839 | 0.5566 | 0.7839 | 0.5280 | 0.7690 |
| 0.4841 | 10.0 | 1607 | 0.7122 | 0.7878 | 0.7732 | 0.7878 | 0.5355 | 0.5777 | 0.0195 | 0.0189 | 0.9748 | 0.9842 | 0.7878 | 0.5777 | 0.7878 | 0.5495 | 0.7776 |
| 0.4841 | 11.0 | 1768 | 0.6813 | 0.7916 | 0.7782 | 0.7916 | 0.5712 | 0.5765 | 0.0191 | 0.0185 | 0.9744 | 0.9844 | 0.7916 | 0.5765 | 0.7916 | 0.5563 | 0.7805 |
| 0.4841 | 12.0 | 1929 | 0.6845 | 0.7978 | 0.7922 | 0.7978 | 0.6111 | 0.6226 | 0.0184 | 0.0178 | 0.9759 | 0.9849 | 0.7978 | 0.6226 | 0.7978 | 0.6092 | 0.7927 |
| 0.3838 | 13.0 | 2089 | 0.6929 | 0.7986 | 0.7947 | 0.7986 | 0.6347 | 0.6038 | 0.0184 | 0.0177 | 0.9743 | 0.9849 | 0.7986 | 0.6038 | 0.7986 | 0.5954 | 0.7903 |
| 0.3838 | 14.0 | 2250 | 0.6929 | 0.8017 | 0.7960 | 0.8017 | 0.6369 | 0.6270 | 0.0180 | 0.0174 | 0.9754 | 0.9851 | 0.8017 | 0.6270 | 0.8017 | 0.6174 | 0.7952 |
| 0.3838 | 14.93 | 2400 | 0.6841 | 0.8048 | 0.7955 | 0.8048 | 0.6332 | 0.6316 | 0.0177 | 0.0170 | 0.9753 | 0.9853 | 0.8048 | 0.6316 | 0.8048 | 0.6233 | 0.7978 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
| {"license": "cc-by-sa-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall"], "base_model": "nlpaueb/legal-bert-base-uncased", "model-index": [{"name": "legal-bert-lora", "results": []}]} | xshubhamx/legal-bert-lora | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2024-04-16T22:48:21+00:00 | [] | [] | TAGS
#tensorboard #safetensors #generated_from_trainer #base_model-nlpaueb/legal-bert-base-uncased #license-cc-by-sa-4.0 #region-us
| legal-bert-lora
===============
This model is a fine-tuned version of nlpaueb/legal-bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6841
* Accuracy: 0.8048
* Precision: 0.7955
* Recall: 0.8048
* Precision Macro: 0.6332
* Recall Macro: 0.6316
* Macro Fpr: 0.0177
* Weighted Fpr: 0.0170
* Weighted Specificity: 0.9753
* Macro Specificity: 0.9853
* Weighted Sensitivity: 0.8048
* Macro Sensitivity: 0.6316
* F1 Micro: 0.8048
* F1 Macro: 0.6233
* F1 Weighted: 0.7978
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
### Training results
### Framework versions
* Transformers 4.35.2
* Pytorch 2.1.0+cu121
* Datasets 2.18.0
* Tokenizers 0.15.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] | [
"TAGS\n#tensorboard #safetensors #generated_from_trainer #base_model-nlpaueb/legal-bert-base-uncased #license-cc-by-sa-4.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] |
text-generation | transformers | ## Model Card
### Model Details
- **Model Name**: llama-fine-tune
- **Language**: Spanish
- **Training Type**: Fine-tuning
- **Base Model**: [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf)
- **Model Description**: Un modelo Llama 2 de 7 mil millones de parámetros ajustado (fine-tuned) en un conjunto de datos de preguntas y respuestas sobre consejos de carrera profesional.
### Intended Use
- **Task(s)**: Generación de texto, respuesta a preguntas sobre consejos de carrera
- **Intended Use**: Proporcionar respuestas detalladas a preguntas relacionadas con consejos de carrera profesional.
- **Limitations**: El modelo está capacitado en un conjunto de datos específico de consejos de carrera, por lo que su conocimiento puede estar limitado a ese dominio. No se recomienda para tareas generales.
### Training Data
- **Data Source**: Conjunto de datos sintético generado con GPT-3 a partir de un prompt de preguntas y respuestas sobre consejos de carrera.
- **Training Examples**: 100 ejemplos en formato `<s>[INST] <pregunta> [/INST] <respuesta> </s>`
### Training Procedure
- **Fine-tuning Approach**: Entrenamiento con PEFT (Parameter-Efficient Fine-Tuning) utilizando LoRA (Low-Rank Adaptation) con un rango de 64.
- **Hiperparámetros**:
- Épocas: 1
- Tamaño de lote: 4
- Tasa de aprendizaje: 2e-4
- Dropout LoRA: 0.1
- Optimizador: Paged AdamW 32-bit
- **Hardware**: GPU
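Como boceto hipotético de la configuración anterior (se usan las APIs estándar de `peft` y `transformers`; el `output_dir` y la preparación de los datos son suposiciones, no provienen de esta tarjeta):
```python
from peft import LoraConfig
from transformers import TrainingArguments

# Configuración LoRA según la tarjeta: rango 64, dropout 0.1.
lora_config = LoraConfig(
    r=64,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)

# Hiperparámetros según la tarjeta: 1 época, lote de 4, lr 2e-4, Paged AdamW 32-bit.
training_args = TrainingArguments(
    output_dir="llama-fine-tune",  # suposición
    num_train_epochs=1,
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    optim="paged_adamw_32bit",
)
```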
### Evaluation
- **Evaluation Set**: 20% de los datos se reservaron para pruebas
- **Ejemplo de entrada**: `<s>[INST] ¿Cómo puedo mejorar mis habilidades de entrevista laboral? [/INST]`
- **Ejemplo de salida**: `Aquí hay algunos consejos para mejorar tus habilidades de entrevista:
1. Prepárate estudiando la empresa y el puesto. Investiga la misión, valores y cultura corporativa. Identifica cómo tus habilidades se alinean con los requisitos del trabajo.
2. Practica respondiendo preguntas comunes de entrevista en voz alta. Grábate y revisa tu lenguaje corporal, tono de voz y respuestas. Pide retroalimentación a amigos.
3. Elabora buenas preguntas que muestren tu interés genuino en el puesto y la empresa. Evita preguntas sobre salario y beneficios al inicio.
4. Vístete apropiadamente con ropa profesional y cómoda. Tu aspecto debe ser pulcro.
5. Llega temprano y sé amable con todos. La primera impresión es clave.
6. Durante la entrevista, mantén contacto visual, habla con claridad y confianza. Sé honesto sobre tus fortalezas y debilidades.
7. Al final, cierra enfatizando tu interés y agradeciendo la oportunidad. Envía un correo o nota de agradecimiento después.
La preparación, práctica y una actitud positiva pueden ayudarte a destacar en las entrevistas laborales. </s>`
### Ethics
Este modelo solo debe usarse de acuerdo con los principios éticos de Anthropic, incluyendo ser beneficioso para la humanidad y respetar los derechos humanos. No debe ser utilizado para difundir desinformación, incitación al odio u otros fines dañinos.
--- | {"language": ["es"], "license": "apache-2.0"} | CamiloVega/Llama-Jobs-Tips | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-16T22:48:46+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #llama #text-generation #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| ## Model Card
### Model Details
- Model Name: llama-fine-tune
- Language: Spanish
- Training Type: Fine-tuning
- Base Model: NousResearch/Llama-2-7b-chat-hf
- Model Description: Un modelo Llama 2 de 7 mil millones de parámetros ajustado (fine-tuned) en un conjunto de datos de preguntas y respuestas sobre consejos de carrera profesional.
### Intended Use
- Task(s): Generación de texto, respuesta a preguntas sobre consejos de carrera
- Intended Use: Proporcionar respuestas detalladas a preguntas relacionadas con consejos de carrera profesional.
- Limitations: El modelo está capacitado en un conjunto de datos específico de consejos de carrera, por lo que su conocimiento puede estar limitado a ese dominio. No se recomienda para tareas generales.
### Training Data
- Data Source: Conjunto de datos sintético generado con GPT-3 a partir de un prompt de preguntas y respuestas sobre consejos de carrera.
- Training Examples: 100 ejemplos en formato '<s>[INST] <pregunta> [/INST] <respuesta> </s>'
### Training Procedure
- Fine-tuning Approach: Entrenamiento con PEFT (Parameter-Efficient Fine-Tuning) utilizando LoRA (Low-Rank Adaptation) con un rango de 64.
- Hiperparámetros:
- Épocas: 1
- Tamaño de lote: 4
- Tasa de aprendizaje: 2e-4
- Dropout LoRA: 0.1
- Optimizador: Paged AdamW 32-bit
- Hardware: GPU
### Evaluation
- Evaluation Set: 20% de los datos se reservaron para pruebas
- Ejemplo de entrada: '<s>[INST] ¿Cómo puedo mejorar mis habilidades de entrevista laboral? [/INST]'
- Ejemplo de salida: 'Aquí hay algunos consejos para mejorar tus habilidades de entrevista:
1. Prepárate estudiando la empresa y el puesto. Investiga la misión, valores y cultura corporativa. Identifica cómo tus habilidades se alinean con los requisitos del trabajo.
2. Practica respondiendo preguntas comunes de entrevista en voz alta. Grábate y revisa tu lenguaje corporal, tono de voz y respuestas. Pide retroalimentación a amigos.
3. Elabora buenas preguntas que muestren tu interés genuino en el puesto y la empresa. Evita preguntas sobre salario y beneficios al inicio.
4. Vístete apropiadamente con ropa profesional y cómoda. Tu aspecto debe ser pulcro.
5. Llega temprano y sé amable con todos. La primera impresión es clave.
6. Durante la entrevista, mantén contacto visual, habla con claridad y confianza. Sé honesto sobre tus fortalezas y debilidades.
7. Al final, cierra enfatizando tu interés y agradeciendo la oportunidad. Envía un correo o nota de agradecimiento después.
La preparación, práctica y una actitud positiva pueden ayudarte a destacar en las entrevistas laborales. </s>'
### Ethics
Este modelo solo debe usarse de acuerdo con los principios éticos de Anthropic, incluyendo ser beneficioso para la humanidad y respetar los derechos humanos. No debe ser utilizado para difundir desinformación, incitación al odio u otros fines dañinos.
--- | [
"## Model Card",
"### Model Details\n- Model Name: llama-fine-tune\n- Language: Spanish\n- Training Type: Fine-tuning\n- Base Model: NousResearch/Llama-2-7b-chat-hf\n- Model Description: Un modelo de Llama de 2.7 mil millones de parámetros ajustado (fine-tuned) en un conjunto de datos de preguntas y respuestas sobre consejos de carrera profesional.",
"### Intended Use\n- Task(s): Generación de texto, respuesta a preguntas sobre consejos de carrera\n- Intended Use: Proporcionar respuestas detalladas a preguntas relacionadas con consejos de carrera profesional.\n- Limitations: El modelo está capacitado en un conjunto de datos específico de consejos de carrera, por lo que su conocimiento puede estar limitado a ese dominio. No se recomienda para tareas generales.",
"### Training Data\n- Data Source: Conjunto de datos sintético generado con GPT-3 a partir de un prompt de preguntas y respuestas sobre consejos de carrera.\n- Training Examples: 100 ejemplos en formato '<s>[INST] <pregunta> [/INST] <respuesta> </s>'",
"### Training Procedure\n- Fine-tuning Approach: Entrenamiento con PEFT (Parameter-Efficient Fine-Tuning) utilizando LoRA (Low-Rank Adaptation) con un rango de 64.\n- Hiperparámetros:\n - Épocas: 1\n - Tamaño de lote: 4\n - Tasa de aprendizaje: 2e-4\n - Dropout LoRA: 0.1\n - Optimizador: Paged AdamW 32-bit\n- Hardware: GPU",
"### Evaluation\n- Evaluation Set: 20% de los datos se reservaron para pruebas\n- Ejemplo de entrada: '<s>[INST] ¿Cómo puedo mejorar mis habilidades de entrevista laboral? [/INST]'\n- Ejemplo de salida: 'Aquí hay algunos consejos para mejorar tus habilidades de entrevista:\n\n1. Prepárate estudiando la empresa y el puesto. Investiga la misión, valores y cultura corporativa. Identifica cómo tus habilidades se alinean con los requisitos del trabajo.\n\n2. Practica respondiendo preguntas comunes de entrevista en voz alta. Grábate y revisa tu lenguaje corporal, tono de voz y respuestas. Pide retroalimentación a amigos.\n\n3. Elabora buenas preguntas que muestren tu interés genuino en el puesto y la empresa. Evita preguntas sobre salario y beneficios al inicio.\n\n4. Vístete apropiadamente con ropa profesional y cómoda. Tu aspecto debe ser pulcro.\n\n5. Llega temprano y sé amable con todos. La primera impresión es clave.\n\n6. Durante la entrevista, mantén contacto visual, habla con claridad y confianza. Sé honesto sobre tus fortalezas y debilidades.\n\n7. Al final, cierra enfatizando tu interés y agradeciendo la oportunidad. Envía un correo o nota de agradecimiento después.\n\nLa preparación, práctica y una actitud positiva pueden ayudarte a destacar en las entrevistas laborales. </s>'",
"### Ethics\n\nEste modelo solo debe usarse de acuerdo con los principios éticos de Anthropic, incluyendo ser beneficioso para la humanidad y respetar los derechos humanos. No debe ser utilizado para difundir desinformación, incitación al odio u otros fines dañinos.\n---"
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Model Card",
"### Model Details\n- Model Name: llama-fine-tune\n- Language: Spanish\n- Training Type: Fine-tuning\n- Base Model: NousResearch/Llama-2-7b-chat-hf\n- Model Description: Un modelo de Llama de 2.7 mil millones de parámetros ajustado (fine-tuned) en un conjunto de datos de preguntas y respuestas sobre consejos de carrera profesional.",
"### Intended Use\n- Task(s): Generación de texto, respuesta a preguntas sobre consejos de carrera\n- Intended Use: Proporcionar respuestas detalladas a preguntas relacionadas con consejos de carrera profesional.\n- Limitations: El modelo está capacitado en un conjunto de datos específico de consejos de carrera, por lo que su conocimiento puede estar limitado a ese dominio. No se recomienda para tareas generales.",
"### Training Data\n- Data Source: Conjunto de datos sintético generado con GPT-3 a partir de un prompt de preguntas y respuestas sobre consejos de carrera.\n- Training Examples: 100 ejemplos en formato '<s>[INST] <pregunta> [/INST] <respuesta> </s>'",
"### Training Procedure\n- Fine-tuning Approach: Entrenamiento con PEFT (Parameter-Efficient Fine-Tuning) utilizando LoRA (Low-Rank Adaptation) con un rango de 64.\n- Hiperparámetros:\n - Épocas: 1\n - Tamaño de lote: 4\n - Tasa de aprendizaje: 2e-4\n - Dropout LoRA: 0.1\n - Optimizador: Paged AdamW 32-bit\n- Hardware: GPU",
"### Evaluation\n- Evaluation Set: 20% de los datos se reservaron para pruebas\n- Ejemplo de entrada: '<s>[INST] ¿Cómo puedo mejorar mis habilidades de entrevista laboral? [/INST]'\n- Ejemplo de salida: 'Aquí hay algunos consejos para mejorar tus habilidades de entrevista:\n\n1. Prepárate estudiando la empresa y el puesto. Investiga la misión, valores y cultura corporativa. Identifica cómo tus habilidades se alinean con los requisitos del trabajo.\n\n2. Practica respondiendo preguntas comunes de entrevista en voz alta. Grábate y revisa tu lenguaje corporal, tono de voz y respuestas. Pide retroalimentación a amigos.\n\n3. Elabora buenas preguntas que muestren tu interés genuino en el puesto y la empresa. Evita preguntas sobre salario y beneficios al inicio.\n\n4. Vístete apropiadamente con ropa profesional y cómoda. Tu aspecto debe ser pulcro.\n\n5. Llega temprano y sé amable con todos. La primera impresión es clave.\n\n6. Durante la entrevista, mantén contacto visual, habla con claridad y confianza. Sé honesto sobre tus fortalezas y debilidades.\n\n7. Al final, cierra enfatizando tu interés y agradeciendo la oportunidad. Envía un correo o nota de agradecimiento después.\n\nLa preparación, práctica y una actitud positiva pueden ayudarte a destacar en las entrevistas laborales. </s>'",
"### Ethics\n\nEste modelo solo debe usarse de acuerdo con los principios éticos de Anthropic, incluyendo ser beneficioso para la humanidad y respetar los derechos humanos. No debe ser utilizado para difundir desinformación, incitación al odio u otros fines dañinos.\n---"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tapt_seq_bn_amazon_helpfulness_classification_model_v2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3540
- Accuracy: 0.864
- F1 Macro: 0.6950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3384 | 1.0 | 1563 | 0.3308 | 0.8586 | 0.6739 |
| 0.3245 | 2.0 | 3126 | 0.3256 | 0.8652 | 0.6719 |
| 0.3258 | 3.0 | 4689 | 0.3408 | 0.8674 | 0.6464 |
| 0.3309 | 4.0 | 6252 | 0.3150 | 0.8678 | 0.6527 |
| 0.292 | 5.0 | 7815 | 0.3226 | 0.8692 | 0.6787 |
| 0.2756 | 6.0 | 9378 | 0.3384 | 0.8688 | 0.6498 |
| 0.2584 | 7.0 | 10941 | 0.3489 | 0.8654 | 0.6946 |
| 0.2758 | 8.0 | 12504 | 0.3540 | 0.864 | 0.6950 |
| 0.2476 | 9.0 | 14067 | 0.3540 | 0.8668 | 0.6688 |
| 0.2303 | 10.0 | 15630 | 0.3686 | 0.8662 | 0.6542 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
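As a hedged loading sketch (the repository id below comes from this card's metadata; whether the checkpoint loads directly as a full sequence-classification model rather than as an adapter is an assumption):
```python
from transformers import pipeline

# Repo id taken from this card; the label names are not documented here.
clf = pipeline(
    "text-classification",
    model="BigTMiami/tapt_seq_bn_amazon_helpfulness_classification_model_v2",
)
print(clf("This review was detailed and genuinely helpful."))
```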
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "roberta-base", "model-index": [{"name": "tapt_seq_bn_amazon_helpfulness_classification_model_v2", "results": []}]} | BigTMiami/tapt_seq_bn_amazon_helpfulness_classification_model_v2 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:roberta-base",
"license:mit",
"region:us"
] | null | 2024-04-16T22:50:51+00:00 | [] | [] | TAGS
#tensorboard #generated_from_trainer #base_model-roberta-base #license-mit #region-us
| tapt\_seq\_bn\_amazon\_helpfulness\_classification\_model\_v2
=============================================================
This model is a fine-tuned version of roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3540
* Accuracy: 0.864
* F1 Macro: 0.6950
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.06
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#tensorboard #generated_from_trainer #base_model-roberta-base #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null | # TS-Corpus BPE Tokenizer (32k, Cased)
## Overview
This repository hosts a Byte Pair Encoding (BPE) tokenizer with a vocabulary size of 32,000, trained cased using several datasets from the TS Corpus website. The BPE method is particularly effective for languages like Turkish, providing a balance between word-level and character-level tokenization.
## Dataset Sources
The tokenizer was trained on a variety of text sources from TS Corpus, ensuring a broad linguistic coverage. These sources include:
- [TS Corpus V2](https://tscorpus.com/corpora/ts-corpus-v2/)
- [TS Wikipedia Corpus](https://tscorpus.com/corpora/ts-wikipedia-corpus/)
- [TS Abstract Corpus](https://tscorpus.com/corpora/ts-abstract-corpus/)
- [TS Idioms and Proverbs Corpus](https://tscorpus.com/corpora/ts-idioms-and-proverbs-corpus/)
- [Syllable Corpus](https://tscorpus.com/corpora/syllable-corpus/)
- [Turkish Constitution Corpus](https://tscorpus.com/corpora/turkish-constitution-corpus/)
The inclusion of idiomatic expressions, proverbs, and legal terminology provides a comprehensive toolkit for processing Turkish text across different domains.
## Tokenizer Model
Utilizing the Byte Pair Encoding (BPE) method, this tokenizer excels in efficiently managing subword units without the need for an extensive vocabulary. BPE is especially suitable for handling the agglutinative nature of Turkish, where words can have multiple suffixes.
## Usage
To use this tokenizer in your projects, load it with the Hugging Face `transformers` library:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("tahaenesaslanturk/ts-corpus-bpe-32k-cased")
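# A small usage sketch (the Turkish sentence below is only an illustrative example).
encoded = tokenizer("Merhaba dünya, bugün hava çok güzel.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))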
``` | {"license": "mit"} | tahaenesaslanturk/ts-corpus-bpe-32k-cased | null | [
"license:mit",
"region:us"
] | null | 2024-04-16T22:54:16+00:00 | [] | [] | TAGS
#license-mit #region-us
| # TS-Corpus BPE Tokenizer (32k, Cased)
## Overview
This repository hosts a Byte Pair Encoding (BPE) tokenizer with a vocabulary size of 32,000, trained cased using several datasets from the TS Corpus website. The BPE method is particularly effective for languages like Turkish, providing a balance between word-level and character-level tokenization.
## Dataset Sources
The tokenizer was trained on a variety of text sources from TS Corpus, ensuring a broad linguistic coverage. These sources include:
- TS Corpus V2
- TS Wikipedia Corpus
- TS Abstract Corpus
- TS Idioms and Proverbs Corpus
- Syllable Corpus
- Turkish Constitution Corpus
The inclusion of idiomatic expressions, proverbs, and legal terminology provides a comprehensive toolkit for processing Turkish text across different domains.
## Tokenizer Model
Utilizing the Byte Pair Encoding (BPE) method, this tokenizer excels in efficiently managing subword units without the need for an extensive vocabulary. BPE is especially suitable for handling the agglutinative nature of Turkish, where words can have multiple suffixes.
## Usage
To use this tokenizer in your projects, load it with the Hugging Face 'transformers' library:
| [
"# TS-Corpus BPE Tokenizer (32k, Cased)",
"## Overview\nThis repository hosts a Byte Pair Encoding (BPE) tokenizer with a vocabulary size of 32,000, trained uncased using several datasets from the TS Corpus website. The BPE method is particularly effective for languages like Turkish, providing a balance between word-level and character-level tokenization.",
"## Dataset Sources\nThe tokenizer was trained on a variety of text sources from TS Corpus, ensuring a broad linguistic coverage. These sources include:\n- TS Corpus V2\n- TS Wikipedia Corpus\n- TS Abstract Corpus\n- TS Idioms and Proverbs Corpus\n- Syllable Corpus\n- Turkish Constitution Corpus\n\nThe inclusion of idiomatic expressions, proverbs, and legal terminology provides a comprehensive toolkit for processing Turkish text across different domains.",
"## Tokenizer Model\nUtilizing the Byte Pair Encoding (BPE) method, this tokenizer excels in efficiently managing subword units without the need for an extensive vocabulary. BPE is especially suitable for handling the agglutinative nature of Turkish, where words can have multiple suffixes.",
"## Usage\nTo use this tokenizer in your projects, load it with the Hugging Face 'transformers' library:"
] | [
"TAGS\n#license-mit #region-us \n",
"# TS-Corpus BPE Tokenizer (32k, Cased)",
"## Overview\nThis repository hosts a Byte Pair Encoding (BPE) tokenizer with a vocabulary size of 32,000, trained uncased using several datasets from the TS Corpus website. The BPE method is particularly effective for languages like Turkish, providing a balance between word-level and character-level tokenization.",
"## Dataset Sources\nThe tokenizer was trained on a variety of text sources from TS Corpus, ensuring a broad linguistic coverage. These sources include:\n- TS Corpus V2\n- TS Wikipedia Corpus\n- TS Abstract Corpus\n- TS Idioms and Proverbs Corpus\n- Syllable Corpus\n- Turkish Constitution Corpus\n\nThe inclusion of idiomatic expressions, proverbs, and legal terminology provides a comprehensive toolkit for processing Turkish text across different domains.",
"## Tokenizer Model\nUtilizing the Byte Pair Encoding (BPE) method, this tokenizer excels in efficiently managing subword units without the need for an extensive vocabulary. BPE is especially suitable for handling the agglutinative nature of Turkish, where words can have multiple suffixes.",
"## Usage\nTo use this tokenizer in your projects, load it with the Hugging Face 'transformers' library:"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": ["trl", "sft"]} | lilyray/falcon_7b_emo_motiv_sileod | null | [
"transformers",
"safetensors",
"trl",
"sft",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-16T22:54:47+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #trl #sft #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #trl #sft #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |