| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1–900k) | metadata (stringlengths, 2–438k) | id (stringlengths, 5–122) | last_modified (null) | tags (sequencelengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25–25) | arxiv (sequencelengths, 0–201) | languages (sequencelengths, 0–1.83k) | tags_str (stringlengths, 17–9.34k) | text_str (stringlengths, 0–389k) | text_lists (sequencelengths, 0–722) | processed_texts (sequencelengths, 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
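Since the snippet above is missing, here is a minimal, unofficial sketch of loading this checkpoint with the standard 🤗 `transformers` causal-LM API; the repo id is taken from this record's metadata, the `custom_code` tag suggests `trust_remote_code=True`, and the prompt is an arbitrary example.

```python
# Minimal sketch, not an official example from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elliotthwangmsa/KimLanpure-phi-3-zh"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("你好,請簡單自我介紹。", return_tensors="pt")  # arbitrary example prompt
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```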
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | elliotthwangmsa/KimLanpure-phi-3-zh | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T02:47:26+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
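The snippet is likewise missing here; below is a minimal sketch using the `pipeline` API, assuming this checkpoint supports text generation (the record's pipeline tag is unset, so that is an assumption; the repo id comes from this record's metadata).

```python
# Minimal sketch, not an official example; assumes a text-generation head.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="zandfj/LLaMA2-7B-Chat-dpo-zf-042701-moren",  # repo id from this record
)
print(generator("Hello! Please introduce yourself.", max_new_tokens=50)[0]["generated_text"])
```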
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | zandfj/LLaMA2-7B-Chat-dpo-zf-042701-moren | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T02:47:45+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- Accuracy: 0.923
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3117 | 0.9065 | 0.9055 |
| No log | 2.0 | 500 | 0.2156 | 0.923 | 0.9226 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.2.2
- Datasets 2.12.0
- Tokenizers 0.13.2
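The card leaves inference implicit; as an illustrative sketch (not part of the original card), the fine-tuned checkpoint can be run through the `text-classification` pipeline, with the repo id taken from this record's metadata.

```python
# Illustrative sketch: emotion classification with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="VuaCoBac/distilbert-base-uncased-finetuned-emotion",  # repo id from this record
)
# Returns a list of {"label": ..., "score": ...} dicts, one per input.
print(classifier("I can't wait to see you again!"))
```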
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.923, "name": "Accuracy"}, {"type": "f1", "value": 0.9225647553629688, "name": "F1"}]}]}]} | VuaCoBac/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T02:48:38+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2156
* Accuracy: 0.923
* F1: 0.9226
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.32.1
* Pytorch 2.2.2
* Datasets 2.12.0
* Tokenizers 0.13.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.2\n* Datasets 2.12.0\n* Tokenizers 0.13.2"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.2\n* Datasets 2.12.0\n* Tokenizers 0.13.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10
This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
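For readers reproducing this setup, the hyperparameters above map roughly onto 🤗 `TrainingArguments` as sketched below; this is a hedged reconstruction, not the authors' actual training script, and the `output_dir` is a placeholder.

```python
# Rough reconstruction of the listed hyperparameters; not the authors' script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pythia-410m-enronspam",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=0,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the default
    # AdamW settings, so no extra optimizer arguments are needed.
)
```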
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-410m", "model-index": [{"name": "robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10", "results": []}]} | AlignmentResearch/robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T02:50:34+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-410m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10
This model is a fine-tuned version of EleutherAI/pythia-410m on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10\n\nThis model is a fine-tuned version of EleutherAI/pythia-410m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt_neox #text-classification #generated_from_trainer #base_model-EleutherAI/pythia-410m #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10\n\nThis model is a fine-tuned version of EleutherAI/pythia-410m on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 64\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/zephyr-7b-beta", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1", "results": []}]} | ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T02:50:50+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/zephyr-7b-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1
This model is a fine-tuned version of HuggingFaceH4/zephyr-7b-beta on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| [
"# 0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/zephyr-7b-beta on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/zephyr-7b-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_4iters_bs256_nodpo_only4w_zephyr_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/zephyr-7b-beta on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] |
null | null | ---
license: apache-2.0
---
# Mobius RWKV r5 chat 12B 8k
Mobius is an RWKV v5.2 arch chat model, benefiting from [Matrix-Valued States and Dynamic Recurrence](https://arxiv.org/abs/2404.05892).
## Introduction
Mobius is an RWKV v5.2 arch model, a state-based RNN+CNN+Transformer mixed language model pretrained on a certain amount of data.
In comparison with the previously released Mobius, the improvements include:
* Only 24 GB of VRAM needed to run this model locally with fp16;
* Significant performance improvement;
* Multilingual support;
* Stable support of 128K context length.
* Base model [Mobius-mega-12B-128k-base](https://huggingface.co/TimeMobius/Moibus-mega-12B-128k-base)
## Usage
We encourage you to use few-shot prompting with this model. That said, direct use of the `User: xxxx\n\nAssistant: xxx\n\n` format works well too and can bring out its full ability.
Recommended temperature/top-p pairs: 0.7/0.6, 1/0.3, 1.5/0.3, 0.2/0.8.
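As a concrete illustration of that prompt format (a hypothetical few-shot example, not from the authors):

```python
# Hypothetical few-shot prompt built in the User:/Assistant: format above.
examples = [
    ("What is the capital of France?", "The capital of France is Paris."),
]
question = "What is the capital of Japan?"

prompt = ""
for q, a in examples:
    prompt += f"User: {q}\n\nAssistant: {a}\n\n"
prompt += f"User: {question}\n\nAssistant:"
print(prompt)
```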
## More details
Mobius 12B 128k is based on the RWKV v5.2 arch, a leading state-based RNN+CNN+Transformer mixed large language model focused on the open-source community:
* 10~100x training/inference cost reduction;
* state-based, selective memory, which makes it good at grokking;
* community support.
## Requirements
24 GB VRAM to run fp16, 12 GB for int8, 6 GB for nf4 with the Ai00 server.
* [RWKV Runner](https://github.com/josStorer/RWKV-Runner)
* [Ai00 server](https://github.com/cgisky1980/ai00_rwkv_server)
## Future plan
If you need an HF version, let us know.
[Mobius-Chat-12B-128k](https://huggingface.co/TimeMobius/Mobius-Chat-12B-128k) | {"license": "apache-2.0"} | TimeMobius/Mobius-RWKV-r5-chat-12B-8k | null | [
"arxiv:2404.05892",
"license:apache-2.0",
"region:us"
] | null | 2024-04-27T02:54:25+00:00 | [
"2404.05892"
] | [] | TAGS
#arxiv-2404.05892 #license-apache-2.0 #region-us
| ---
license: apache-2.0
---
# Mobius RWKV r5 chat 12B 8k
Mobius is an RWKV v5.2 arch chat model, benefiting from Matrix-Valued States and Dynamic Recurrence.
## Introduction
Mobius is an RWKV v5.2 arch model, a state-based RNN+CNN+Transformer mixed language model pretrained on a certain amount of data.
In comparison with the previously released Mobius, the improvements include:
* Only 24 GB of VRAM needed to run this model locally with fp16;
* Significant performance improvement;
* Multilingual support;
* Stable support of 128K context length.
* Base model Mobius-mega-12B-128k-base
## Usage
We encourage you to use few-shot prompting with this model. That said, direct use of the `User: xxxx\n\nAssistant: xxx\n\n` format works well too and can bring out its full ability.
Recommended temperature/top-p pairs: 0.7/0.6, 1/0.3, 1.5/0.3, 0.2/0.8.
## More details
Mobius 12B 128k is based on the RWKV v5.2 arch, a leading state-based RNN+CNN+Transformer mixed large language model focused on the open-source community:
* 10~100x training/inference cost reduction;
* state-based, selective memory, which makes it good at grokking;
* community support.
## Requirements
24 GB VRAM to run fp16, 12 GB for int8, 6 GB for nf4 with the Ai00 server.
* RWKV Runner
* Ai00 server
## Future plan
If you need an HF version, let us know.
Mobius-Chat-12B-128k | [
"# Mobius RWKV r5 chat 12B 8k\nMobius is a RWKV v5.2 arch chat model, benifit from Matrix-Valued States and Dynamic Recurrence",
"## Introduction\n\nMobius is a RWKV v5.2 arch model, a state based RNN+CNN+Transformer Mixed language model pretrained on a certain amount of data.\nIn comparison with the previous released Mobius, the improvements include:\n\n* Only 24G Vram to run this model locally with fp16;\n* Significant performance improvement;\n* Multilingual support ;\n* Stable support of 128K context length.\n* Base model Mobius-mega-12B-128k-base",
"## Usage\nWe encourage you use few shots to use this model, Desipte Directly use User: xxxx\\n\\nAssistant: xxx\\n\\n is really good too, Can boost all potential ability. \n\nRecommend Temp and topp: 0.7 0.6/1 0.3/1.5 0.3/0.2 0.8",
"## More details\nMobius 12B 128k based on RWKV v5.2 arch, which is leading state based RNN+CNN+Transformer Mixed large language model which focus opensouce community\n* 10~100 trainning/inference cost reduce;\n* state based,selected memory, which mean good at grok;\n* community support.",
"## requirements\n24G vram to run fp16, 12G for int8, 6G for nf4 with Ai00 server.\n\n* RWKV Runner\n* Ai00 server",
"## future plan\nIf you need a HF version let us know\n\nMobius-Chat-12B-128k"
] | [
"TAGS\n#arxiv-2404.05892 #license-apache-2.0 #region-us \n",
"# Mobius RWKV r5 chat 12B 8k\nMobius is a RWKV v5.2 arch chat model, benifit from Matrix-Valued States and Dynamic Recurrence",
"## Introduction\n\nMobius is a RWKV v5.2 arch model, a state based RNN+CNN+Transformer Mixed language model pretrained on a certain amount of data.\nIn comparison with the previous released Mobius, the improvements include:\n\n* Only 24G Vram to run this model locally with fp16;\n* Significant performance improvement;\n* Multilingual support ;\n* Stable support of 128K context length.\n* Base model Mobius-mega-12B-128k-base",
"## Usage\nWe encourage you use few shots to use this model, Desipte Directly use User: xxxx\\n\\nAssistant: xxx\\n\\n is really good too, Can boost all potential ability. \n\nRecommend Temp and topp: 0.7 0.6/1 0.3/1.5 0.3/0.2 0.8",
"## More details\nMobius 12B 128k based on RWKV v5.2 arch, which is leading state based RNN+CNN+Transformer Mixed large language model which focus opensouce community\n* 10~100 trainning/inference cost reduce;\n* state based,selected memory, which mean good at grok;\n* community support.",
"## requirements\n24G vram to run fp16, 12G for int8, 6G for nf4 with Ai00 server.\n\n* RWKV Runner\n* Ai00 server",
"## future plan\nIf you need a HF version let us know\n\nMobius-Chat-12B-128k"
] |
text-generation | transformers | # Description
4-bit AWQ-quantized version of [stylellm/ShuiHuZhuan-6b](https://huggingface.co/stylellm/ShuiHuZhuan-6b) | {"license": "other", "license_name": "yi-license", "license_link": "https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE"} | stylellm/ShuiHuZhuan-6b-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-27T02:55:51+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| # Description
4-bit AWQ-quantized version of stylellm/ShuiHuZhuan-6b | [
"# Description\n4-bit AWQ-quantized version of stylellm/ShuiHuZhuan-6b"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Description\n4-bit AWQ-quantized version of stylellm/ShuiHuZhuan-6b"
] |
text-generation | transformers | # Description
4-bit AWQ-quantized version of [stylellm/XiYouJi-6b](https://huggingface.co/stylellm/XiYouJi-6b) | {"license": "other", "license_name": "yi-license", "license_link": "https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE"} | stylellm/XiYouJi-6b-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-27T02:57:06+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| # Description
4-bit AWQ-quantized version of stylellm/XiYouJi-6b | [
"# Description\n4-bit AWQ-quantized version of stylellm/XiYouJi-6b"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Description\n4-bit AWQ-quantized version of stylellm/XiYouJi-6b"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speech_ocean_wav2vec_mdd
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3663
- Wer: 0.0863
- Cer: 0.0692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 45.149 | 0.9873 | 39 | 45.0584 | 1.0258 | 0.7932 |
| 40.7325 | 2.0 | 79 | 32.0660 | 1.0 | 1.0 |
| 14.8164 | 2.9873 | 118 | 8.1694 | 1.0 | 1.0 |
| 5.6535 | 4.0 | 158 | 4.5922 | 1.0 | 1.0 |
| 3.9508 | 4.9873 | 197 | 3.8581 | 1.0 | 1.0 |
| 3.8065 | 6.0 | 237 | 3.7907 | 1.0 | 1.0 |
| 3.766 | 6.9873 | 276 | 3.7769 | 1.0 | 1.0 |
| 3.7552 | 8.0 | 316 | 3.7465 | 1.0 | 1.0 |
| 3.7489 | 8.9873 | 355 | 3.7611 | 1.0 | 1.0 |
| 3.7263 | 10.0 | 395 | 3.7234 | 1.0 | 1.0 |
| 3.7343 | 10.9873 | 434 | 3.6934 | 1.0 | 1.0 |
| 3.6327 | 12.0 | 474 | 3.4204 | 1.0 | 1.0 |
| 3.1861 | 12.9873 | 513 | 2.7907 | 0.9710 | 0.9864 |
| 2.2814 | 14.0 | 553 | 1.7142 | 0.5088 | 0.5401 |
| 1.6854 | 14.9873 | 592 | 1.0573 | 0.2488 | 0.1914 |
| 1.2968 | 16.0 | 632 | 0.7282 | 0.1786 | 0.1391 |
| 0.8626 | 16.9873 | 671 | 0.5435 | 0.1305 | 0.0999 |
| 0.7852 | 18.0 | 711 | 0.4440 | 0.1046 | 0.0831 |
| 0.6332 | 18.9873 | 750 | 0.3847 | 0.0936 | 0.0748 |
| 0.6518 | 19.7468 | 780 | 0.3663 | 0.0863 | 0.0692 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
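The card does not include an inference snippet; below is a minimal sketch via the `automatic-speech-recognition` pipeline. The repo id comes from this record, `sample.wav` is a placeholder, and since the model appears to target mispronunciation detection, its output may be phoneme-like rather than plain text.

```python
# Minimal sketch, not from the card; "sample.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="nrshoudi/speech_ocean_wav2vec_mdd",  # repo id from this record
)
print(asr("sample.wav")["text"])
```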
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-large-xlsr-53", "model-index": [{"name": "speech_ocean_wav2vec_mdd", "results": []}]} | nrshoudi/speech_ocean_wav2vec_mdd | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:03:12+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-large-xlsr-53 #license-apache-2.0 #endpoints_compatible #region-us
| speech\_ocean\_wav2vec\_mdd
===========================
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3663
* Wer: 0.0863
* Cer: 0.0692
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 20
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-large-xlsr-53 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5987
- F1 Score: 0.6689
- Accuracy: 0.6712
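The card does not show how to load the resulting adapter; a heavily hedged sketch with the `peft` API follows, where the sequence-classification head and `num_labels=2` are assumptions not stated by the card.

```python
# Hedged sketch: attaching the PEFT adapter to its base model. The head
# class and num_labels are assumptions; the card does not specify them.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_8192_512_30M", num_labels=2
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_8192_512_30M-L8_f"
)
```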
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6507 | 1.04 | 200 | 0.6235 | 0.6298 | 0.6572 |
| 0.6163 | 2.08 | 400 | 0.6337 | 0.6501 | 0.6474 |
| 0.6088 | 3.12 | 600 | 0.6120 | 0.6605 | 0.6696 |
| 0.6058 | 4.17 | 800 | 0.6387 | 0.6452 | 0.6426 |
| 0.6026 | 5.21 | 1000 | 0.6129 | 0.6650 | 0.6699 |
| 0.5971 | 6.25 | 1200 | 0.6107 | 0.6691 | 0.6758 |
| 0.5906 | 7.29 | 1400 | 0.6096 | 0.6723 | 0.6732 |
| 0.5903 | 8.33 | 1600 | 0.6159 | 0.6679 | 0.6683 |
| 0.5856 | 9.38 | 1800 | 0.6257 | 0.6653 | 0.6628 |
| 0.5813 | 10.42 | 2000 | 0.6059 | 0.6739 | 0.6807 |
| 0.5826 | 11.46 | 2200 | 0.6015 | 0.6749 | 0.6804 |
| 0.5693 | 12.5 | 2400 | 0.6119 | 0.6757 | 0.6768 |
| 0.5717 | 13.54 | 2600 | 0.6076 | 0.6825 | 0.6849 |
| 0.5682 | 14.58 | 2800 | 0.6147 | 0.6771 | 0.6810 |
| 0.5733 | 15.62 | 3000 | 0.6180 | 0.6786 | 0.6797 |
| 0.5631 | 16.67 | 3200 | 0.6091 | 0.6741 | 0.6777 |
| 0.5629 | 17.71 | 3400 | 0.6161 | 0.6737 | 0.6738 |
| 0.5585 | 18.75 | 3600 | 0.6159 | 0.6766 | 0.6781 |
| 0.5583 | 19.79 | 3800 | 0.6155 | 0.6754 | 0.6761 |
| 0.5534 | 20.83 | 4000 | 0.6086 | 0.6744 | 0.6777 |
| 0.5526 | 21.88 | 4200 | 0.6331 | 0.6719 | 0.6699 |
| 0.5494 | 22.92 | 4400 | 0.6340 | 0.6584 | 0.6562 |
| 0.548 | 23.96 | 4600 | 0.6266 | 0.6708 | 0.6689 |
| 0.5434 | 25.0 | 4800 | 0.6296 | 0.6724 | 0.6719 |
| 0.5406 | 26.04 | 5000 | 0.6316 | 0.6725 | 0.6719 |
| 0.5386 | 27.08 | 5200 | 0.6341 | 0.6677 | 0.6654 |
| 0.5379 | 28.12 | 5400 | 0.6361 | 0.6615 | 0.6592 |
| 0.5376 | 29.17 | 5600 | 0.6392 | 0.6692 | 0.6673 |
| 0.5324 | 30.21 | 5800 | 0.6367 | 0.6721 | 0.6719 |
| 0.5318 | 31.25 | 6000 | 0.6522 | 0.6627 | 0.6601 |
| 0.5309 | 32.29 | 6200 | 0.6281 | 0.6727 | 0.6735 |
| 0.5312 | 33.33 | 6400 | 0.6496 | 0.6649 | 0.6628 |
| 0.5269 | 34.38 | 6600 | 0.6352 | 0.6730 | 0.6732 |
| 0.5276 | 35.42 | 6800 | 0.6384 | 0.6666 | 0.6654 |
| 0.5215 | 36.46 | 7000 | 0.6376 | 0.6667 | 0.6657 |
| 0.5187 | 37.5 | 7200 | 0.6477 | 0.6651 | 0.6634 |
| 0.5203 | 38.54 | 7400 | 0.6438 | 0.6674 | 0.6660 |
| 0.5204 | 39.58 | 7600 | 0.6374 | 0.6764 | 0.6774 |
| 0.5214 | 40.62 | 7800 | 0.6509 | 0.6601 | 0.6579 |
| 0.5147 | 41.67 | 8000 | 0.6436 | 0.6632 | 0.6618 |
| 0.5101 | 42.71 | 8200 | 0.6480 | 0.6678 | 0.6667 |
| 0.5118 | 43.75 | 8400 | 0.6471 | 0.6627 | 0.6608 |
| 0.5142 | 44.79 | 8600 | 0.6467 | 0.6651 | 0.6637 |
| 0.5101 | 45.83 | 8800 | 0.6443 | 0.6689 | 0.6680 |
| 0.5095 | 46.88 | 9000 | 0.6576 | 0.6597 | 0.6572 |
| 0.5116 | 47.92 | 9200 | 0.6527 | 0.6672 | 0.6650 |
| 0.5075 | 48.96 | 9400 | 0.6515 | 0.6657 | 0.6641 |
| 0.5094 | 50.0 | 9600 | 0.6544 | 0.6641 | 0.6621 |
| 0.5094 | 51.04 | 9800 | 0.6532 | 0.6641 | 0.6621 |
| 0.5084 | 52.08 | 10000 | 0.6549 | 0.6658 | 0.6637 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T03:04:03+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_8192\_512\_30M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5987
* F1 Score: 0.6689
* Accuracy: 0.6712
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# blue_model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3527
- F1: 0.9217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
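For readers who want to reproduce this setup, the values above map onto `transformers.TrainingArguments` roughly as in the sketch below. This is a minimal illustration only: the card does not name its dataset, so `train_ds` and `eval_ds` are hypothetical placeholders, and the listed Adam betas/epsilon are the library defaults, so they need no explicit arguments.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hyperparameters copied from the list above.
args = TrainingArguments(
    output_dir="blue_model",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")

# `train_ds` and `eval_ds` are hypothetical: the card does not name its dataset.
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```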
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3136 | 1.0 | 1250 | 0.5730 | 0.8487 |
| 0.1427 | 2.0 | 2500 | 0.4297 | 0.8980 |
| 0.032 | 3.0 | 3750 | 0.3527 | 0.9217 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "bert-base-cased", "model-index": [{"name": "blue_model", "results": []}]} | TazCaldwell/blue_model | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:05:27+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| blue\_model
===========
This model is a fine-tuned version of bert-base-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3527
* F1: 0.9217
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-rw-1b-code-gen-llm-task2
This model is a fine-tuned version of [petals-team/falcon-rw-1b](https://huggingface.co/petals-team/falcon-rw-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
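The listed gradient accumulation means the optimizer steps on an effective batch of 2 × 2 = 4 samples, matching `total_train_batch_size` above. A minimal sketch of the equivalent `TrainingArguments`, assuming a single device; the card's `training_steps` corresponds to `max_steps`:

```python
from transformers import TrainingArguments

# Effective batch size = per-device batch size x gradient accumulation steps
# (a single device is assumed here): 2 x 2 = 4.
args = TrainingArguments(
    output_dir="falcon-rw-1b-code-gen-llm-task2",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    max_steps=320,  # the card's "training_steps"
    seed=42,
)
```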
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6083 | 0.2 | 40 | 1.5153 |
| 1.4854 | 0.4 | 80 | 1.3644 |
| 1.3717 | 0.6 | 120 | 1.2477 |
| 1.244 | 0.8 | 160 | 1.2093 |
| 1.2581 | 1.0 | 200 | 1.1897 |
| 1.1757 | 1.2 | 240 | 1.1816 |
| 1.2085 | 1.4 | 280 | 1.1787 |
| 1.1808 | 1.6 | 320 | 1.1783 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "petals-team/falcon-rw-1b", "model-index": [{"name": "falcon-rw-1b-code-gen-llm-task2", "results": []}]} | Katochh/falcon-rw-1b-code-gen-llm-task2 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:petals-team/falcon-rw-1b",
"license:apache-2.0",
"region:us"
] | null | 2024-04-27T03:06:16+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-petals-team/falcon-rw-1b #license-apache-2.0 #region-us
| falcon-rw-1b-code-gen-llm-task2
===============================
This model is a fine-tuned version of petals-team/falcon-rw-1b on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1783
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 4
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.03
* training\_steps: 320
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* training\\_steps: 320",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-petals-team/falcon-rw-1b #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* training\\_steps: 320",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6041
- F1 Score: 0.6807
- Accuracy: 0.6804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
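Since this repository holds a PEFT adapter rather than full model weights, it has to be loaded on top of the base model. A minimal sketch, assuming a sequence-classification head (inferred from the F1/accuracy metrics; the card does not state the task type):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_8192_512_30M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# A classification head is assumed from the F1/accuracy metrics; the card
# itself does not state the task head or label count.
base = AutoModelForSequenceClassification.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
model.eval()
```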
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6457 | 1.04 | 200 | 0.6181 | 0.6382 | 0.6680 |
| 0.6127 | 2.08 | 400 | 0.6378 | 0.6446 | 0.6422 |
| 0.6013 | 3.12 | 600 | 0.6017 | 0.6766 | 0.6781 |
| 0.5961 | 4.17 | 800 | 0.6127 | 0.6739 | 0.6722 |
| 0.588 | 5.21 | 1000 | 0.6058 | 0.6801 | 0.6810 |
| 0.5822 | 6.25 | 1200 | 0.6042 | 0.6709 | 0.6693 |
| 0.5717 | 7.29 | 1400 | 0.5956 | 0.6866 | 0.6898 |
| 0.5663 | 8.33 | 1600 | 0.6086 | 0.6866 | 0.6862 |
| 0.5612 | 9.38 | 1800 | 0.6295 | 0.6614 | 0.6592 |
| 0.5506 | 10.42 | 2000 | 0.6046 | 0.6740 | 0.6764 |
| 0.5482 | 11.46 | 2200 | 0.6004 | 0.6845 | 0.6872 |
| 0.5316 | 12.5 | 2400 | 0.6010 | 0.6865 | 0.6869 |
| 0.5274 | 13.54 | 2600 | 0.6310 | 0.6798 | 0.6777 |
| 0.5205 | 14.58 | 2800 | 0.6221 | 0.6798 | 0.6797 |
| 0.518 | 15.62 | 3000 | 0.6521 | 0.6711 | 0.6686 |
| 0.5022 | 16.67 | 3200 | 0.6426 | 0.6751 | 0.6729 |
| 0.4934 | 17.71 | 3400 | 0.6603 | 0.6669 | 0.6644 |
| 0.4846 | 18.75 | 3600 | 0.6574 | 0.6803 | 0.6790 |
| 0.4814 | 19.79 | 3800 | 0.6547 | 0.6806 | 0.6784 |
| 0.4681 | 20.83 | 4000 | 0.6634 | 0.6783 | 0.6761 |
| 0.4654 | 21.88 | 4200 | 0.6988 | 0.6739 | 0.6716 |
| 0.4593 | 22.92 | 4400 | 0.7006 | 0.6723 | 0.6699 |
| 0.4447 | 23.96 | 4600 | 0.6885 | 0.6701 | 0.6676 |
| 0.442 | 25.0 | 4800 | 0.7219 | 0.6584 | 0.6562 |
| 0.4321 | 26.04 | 5000 | 0.7074 | 0.6746 | 0.6725 |
| 0.4253 | 27.08 | 5200 | 0.7410 | 0.6664 | 0.6644 |
| 0.421 | 28.12 | 5400 | 0.7354 | 0.6665 | 0.6641 |
| 0.413 | 29.17 | 5600 | 0.7220 | 0.6772 | 0.6755 |
| 0.403 | 30.21 | 5800 | 0.7803 | 0.6734 | 0.6709 |
| 0.4008 | 31.25 | 6000 | 0.7683 | 0.6816 | 0.6794 |
| 0.3923 | 32.29 | 6200 | 0.7666 | 0.6714 | 0.6689 |
| 0.3928 | 33.33 | 6400 | 0.7627 | 0.6825 | 0.6804 |
| 0.3826 | 34.38 | 6600 | 0.7727 | 0.6816 | 0.6804 |
| 0.3825 | 35.42 | 6800 | 0.7577 | 0.6845 | 0.6823 |
| 0.3737 | 36.46 | 7000 | 0.7840 | 0.6772 | 0.6748 |
| 0.3737 | 37.5 | 7200 | 0.7641 | 0.6802 | 0.6781 |
| 0.3696 | 38.54 | 7400 | 0.7842 | 0.6822 | 0.6800 |
| 0.3644 | 39.58 | 7600 | 0.7746 | 0.6836 | 0.6820 |
| 0.3611 | 40.62 | 7800 | 0.8042 | 0.6772 | 0.6748 |
| 0.3527 | 41.67 | 8000 | 0.8161 | 0.6755 | 0.6732 |
| 0.3457 | 42.71 | 8200 | 0.8149 | 0.6791 | 0.6771 |
| 0.3512 | 43.75 | 8400 | 0.8125 | 0.6756 | 0.6732 |
| 0.3513 | 44.79 | 8600 | 0.8198 | 0.6714 | 0.6689 |
| 0.3399 | 45.83 | 8800 | 0.8281 | 0.6813 | 0.6790 |
| 0.3407 | 46.88 | 9000 | 0.8229 | 0.6788 | 0.6764 |
| 0.3407 | 47.92 | 9200 | 0.8400 | 0.6769 | 0.6745 |
| 0.3342 | 48.96 | 9400 | 0.8383 | 0.6797 | 0.6774 |
| 0.3355 | 50.0 | 9600 | 0.8366 | 0.6778 | 0.6755 |
| 0.3338 | 51.04 | 9800 | 0.8430 | 0.6817 | 0.6794 |
| 0.3327 | 52.08 | 10000 | 0.8487 | 0.6814 | 0.6790 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T03:09:17+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_8192\_512\_30M-L32\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6041
* F1 Score: 0.6807
* Accuracy: 0.6804
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
object-detection | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session8
This model is a fine-tuned version of [nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7](https://huggingface.co/nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300
- mixed_precision_training: Native AMP
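"Native AMP" refers to PyTorch's built-in automatic mixed precision. The run itself used the `transformers` Trainer, but a generic sketch of what Native AMP looks like in a raw training loop may help; `model`, `optimizer`, and `loader` here are hypothetical placeholders:

```python
import torch

# `model`, `optimizer`, and `loader` are hypothetical placeholders.
scaler = torch.cuda.amp.GradScaler()
for batch in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # forward pass runs in mixed precision
        loss = model(**batch).loss
    scaler.scale(loss).backward()    # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)           # unscale gradients, then step
    scaler.update()
```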
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "base_model": "nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7", "model-index": [{"name": "detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session8", "results": []}]} | nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session8 | null | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:10:59+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7 #endpoints_compatible #region-us
|
# detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session8
This model is a fine-tuned version of nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"# detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session8\n\nThis model is a fine-tuned version of nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 300\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.0.1\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7 #endpoints_compatible #region-us \n",
"# detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session8\n\nThis model is a fine-tuned version of nsugianto/detr-resnet50_finetuned_lstabledetv1s9_lsdocelementdetv1type3_session7 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 300\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.0.1\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4857
- F1 Score: 0.7690
- Accuracy: 0.7686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
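The F1 Score and Accuracy columns in the table below can be reproduced with a standard `compute_metrics` callback. A minimal sketch, assuming macro-averaged F1 (the card does not state the averaging mode):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # Macro averaging is an assumption; the card does not say how F1 is averaged.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```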
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6105 | 1.15 | 200 | 0.5704 | 0.7121 | 0.7118 |
| 0.5517 | 2.3 | 400 | 0.6157 | 0.6694 | 0.6779 |
| 0.5265 | 3.45 | 600 | 0.5768 | 0.7038 | 0.7074 |
| 0.5197 | 4.6 | 800 | 0.5693 | 0.7145 | 0.7172 |
| 0.5123 | 5.75 | 1000 | 0.5369 | 0.7354 | 0.7352 |
| 0.5059 | 6.9 | 1200 | 0.5433 | 0.7396 | 0.7395 |
| 0.5013 | 8.05 | 1400 | 0.5393 | 0.7381 | 0.7380 |
| 0.5019 | 9.2 | 1600 | 0.5736 | 0.7145 | 0.7179 |
| 0.4956 | 10.34 | 1800 | 0.5302 | 0.7427 | 0.7424 |
| 0.4964 | 11.49 | 2000 | 0.5296 | 0.7425 | 0.7424 |
| 0.4879 | 12.64 | 2200 | 0.5755 | 0.7235 | 0.7265 |
| 0.4909 | 13.79 | 2400 | 0.5323 | 0.7410 | 0.7413 |
| 0.4862 | 14.94 | 2600 | 0.5214 | 0.7450 | 0.7449 |
| 0.4847 | 16.09 | 2800 | 0.5236 | 0.7532 | 0.7531 |
| 0.4831 | 17.24 | 3000 | 0.5322 | 0.7455 | 0.7456 |
| 0.4791 | 18.39 | 3200 | 0.5421 | 0.7383 | 0.7391 |
| 0.4831 | 19.54 | 3400 | 0.5213 | 0.7479 | 0.7481 |
| 0.4759 | 20.69 | 3600 | 0.5204 | 0.7502 | 0.7499 |
| 0.4773 | 21.84 | 3800 | 0.5315 | 0.7355 | 0.7370 |
| 0.4715 | 22.99 | 4000 | 0.5248 | 0.7465 | 0.7470 |
| 0.4762 | 24.14 | 4200 | 0.5046 | 0.7544 | 0.7539 |
| 0.4647 | 25.29 | 4400 | 0.5273 | 0.7485 | 0.7485 |
| 0.4735 | 26.44 | 4600 | 0.5185 | 0.7506 | 0.7506 |
| 0.4682 | 27.59 | 4800 | 0.5320 | 0.7436 | 0.7445 |
| 0.4669 | 28.74 | 5000 | 0.5183 | 0.7506 | 0.7510 |
| 0.4703 | 29.89 | 5200 | 0.5236 | 0.7516 | 0.7517 |
| 0.4657 | 31.03 | 5400 | 0.5227 | 0.7485 | 0.7488 |
| 0.4666 | 32.18 | 5600 | 0.5091 | 0.7567 | 0.7564 |
| 0.4586 | 33.33 | 5800 | 0.5142 | 0.7546 | 0.7542 |
| 0.4677 | 34.48 | 6000 | 0.5176 | 0.7511 | 0.7513 |
| 0.4587 | 35.63 | 6200 | 0.5129 | 0.7534 | 0.7531 |
| 0.4624 | 36.78 | 6400 | 0.5180 | 0.7514 | 0.7517 |
| 0.4599 | 37.93 | 6600 | 0.5267 | 0.7485 | 0.7488 |
| 0.461 | 39.08 | 6800 | 0.5112 | 0.7532 | 0.7531 |
| 0.4586 | 40.23 | 7000 | 0.5133 | 0.7532 | 0.7531 |
| 0.4601 | 41.38 | 7200 | 0.5209 | 0.7500 | 0.7503 |
| 0.4588 | 42.53 | 7400 | 0.5120 | 0.7525 | 0.7524 |
| 0.4574 | 43.68 | 7600 | 0.5223 | 0.7465 | 0.7470 |
| 0.4576 | 44.83 | 7800 | 0.5229 | 0.7479 | 0.7485 |
| 0.4575 | 45.98 | 8000 | 0.5164 | 0.7502 | 0.7503 |
| 0.4572 | 47.13 | 8200 | 0.5219 | 0.7480 | 0.7485 |
| 0.4537 | 48.28 | 8400 | 0.5148 | 0.7521 | 0.7521 |
| 0.4542 | 49.43 | 8600 | 0.5129 | 0.7540 | 0.7539 |
| 0.4548 | 50.57 | 8800 | 0.5191 | 0.7505 | 0.7506 |
| 0.4561 | 51.72 | 9000 | 0.5211 | 0.7488 | 0.7492 |
| 0.4512 | 52.87 | 9200 | 0.5229 | 0.7495 | 0.7499 |
| 0.457 | 54.02 | 9400 | 0.5188 | 0.7489 | 0.7492 |
| 0.4543 | 55.17 | 9600 | 0.5228 | 0.7490 | 0.7496 |
| 0.4515 | 56.32 | 9800 | 0.5153 | 0.7531 | 0.7531 |
| 0.4579 | 57.47 | 10000 | 0.5171 | 0.7512 | 0.7513 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T03:11:11+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_8192\_512\_30M-L1\_f
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4857
* F1 Score: 0.7690
* Accuracy: 0.7686
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | zandfj/LLaMA2-7B-Chat-dpo-zf-042710-moren-maybecf | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:17:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4725
- F1 Score: 0.7882
- Accuracy: 0.7877
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5872 | 1.15 | 200 | 0.5828 | 0.7099 | 0.7118 |
| 0.5275 | 2.3 | 400 | 0.5996 | 0.6879 | 0.6959 |
| 0.5012 | 3.45 | 600 | 0.5458 | 0.7278 | 0.7294 |
| 0.494 | 4.6 | 800 | 0.5236 | 0.7389 | 0.7395 |
| 0.4861 | 5.75 | 1000 | 0.5131 | 0.7522 | 0.7521 |
| 0.4786 | 6.9 | 1200 | 0.5109 | 0.7520 | 0.7517 |
| 0.4735 | 8.05 | 1400 | 0.5289 | 0.7457 | 0.7460 |
| 0.4699 | 9.2 | 1600 | 0.5366 | 0.7307 | 0.7334 |
| 0.4656 | 10.34 | 1800 | 0.5022 | 0.7571 | 0.7567 |
| 0.4624 | 11.49 | 2000 | 0.5082 | 0.7500 | 0.7499 |
| 0.4539 | 12.64 | 2200 | 0.5246 | 0.7475 | 0.7481 |
| 0.4532 | 13.79 | 2400 | 0.5058 | 0.7616 | 0.7614 |
| 0.4484 | 14.94 | 2600 | 0.4923 | 0.7615 | 0.7611 |
| 0.4464 | 16.09 | 2800 | 0.5202 | 0.7580 | 0.7585 |
| 0.4427 | 17.24 | 3000 | 0.5187 | 0.7616 | 0.7618 |
| 0.441 | 18.39 | 3200 | 0.5107 | 0.7643 | 0.7643 |
| 0.4411 | 19.54 | 3400 | 0.4989 | 0.7623 | 0.7621 |
| 0.4317 | 20.69 | 3600 | 0.5000 | 0.7755 | 0.7751 |
| 0.432 | 21.84 | 3800 | 0.5128 | 0.7620 | 0.7621 |
| 0.4255 | 22.99 | 4000 | 0.5228 | 0.7568 | 0.7575 |
| 0.4291 | 24.14 | 4200 | 0.4951 | 0.7673 | 0.7668 |
| 0.416 | 25.29 | 4400 | 0.5074 | 0.7654 | 0.7650 |
| 0.4224 | 26.44 | 4600 | 0.5063 | 0.7691 | 0.7686 |
| 0.4215 | 27.59 | 4800 | 0.5098 | 0.7656 | 0.7654 |
| 0.4145 | 28.74 | 5000 | 0.5032 | 0.7645 | 0.7643 |
| 0.4178 | 29.89 | 5200 | 0.5065 | 0.7691 | 0.7686 |
| 0.412 | 31.03 | 5400 | 0.5218 | 0.7599 | 0.7600 |
| 0.41 | 32.18 | 5600 | 0.5066 | 0.7698 | 0.7693 |
| 0.4034 | 33.33 | 5800 | 0.5072 | 0.7709 | 0.7704 |
| 0.4083 | 34.48 | 6000 | 0.5014 | 0.7673 | 0.7668 |
| 0.4009 | 35.63 | 6200 | 0.5110 | 0.7666 | 0.7661 |
| 0.4009 | 36.78 | 6400 | 0.5065 | 0.7626 | 0.7621 |
| 0.4013 | 37.93 | 6600 | 0.5248 | 0.7629 | 0.7625 |
| 0.3998 | 39.08 | 6800 | 0.5121 | 0.7615 | 0.7611 |
| 0.397 | 40.23 | 7000 | 0.5241 | 0.7625 | 0.7621 |
| 0.3973 | 41.38 | 7200 | 0.5170 | 0.7608 | 0.7603 |
| 0.3942 | 42.53 | 7400 | 0.5102 | 0.7658 | 0.7654 |
| 0.3913 | 43.68 | 7600 | 0.5165 | 0.7644 | 0.7639 |
| 0.3918 | 44.83 | 7800 | 0.5233 | 0.7621 | 0.7618 |
| 0.3916 | 45.98 | 8000 | 0.5160 | 0.7684 | 0.7679 |
| 0.3883 | 47.13 | 8200 | 0.5268 | 0.7643 | 0.7639 |
| 0.3857 | 48.28 | 8400 | 0.5265 | 0.7633 | 0.7629 |
| 0.3841 | 49.43 | 8600 | 0.5217 | 0.7626 | 0.7621 |
| 0.3858 | 50.57 | 8800 | 0.5269 | 0.7579 | 0.7575 |
| 0.3862 | 51.72 | 9000 | 0.5219 | 0.7651 | 0.7647 |
| 0.3793 | 52.87 | 9200 | 0.5349 | 0.7618 | 0.7614 |
| 0.3875 | 54.02 | 9400 | 0.5238 | 0.7651 | 0.7647 |
| 0.3832 | 55.17 | 9600 | 0.5286 | 0.7626 | 0.7621 |
| 0.3805 | 56.32 | 9800 | 0.5197 | 0.7662 | 0.7657 |
| 0.3856 | 57.47 | 10000 | 0.5209 | 0.7659 | 0.7654 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T03:17:28+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_8192\_512\_30M-L8\_f
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4725
* F1 Score: 0.7882
* Accuracy: 0.7877
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers | Quantizations of https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha
# From original readme
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to reduce this.**
Our model follows the exact chat template and usage as [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5). Please refer to their model card for more details.
In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free testing.
The conversation template is the same as Openchat 3.5:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
## Code Examples
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
model = transformers.AutoModelForCausalLM.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
def generate_response(prompt):
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(
input_ids,
max_length=256,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
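        # The card advises temperature = 0 to curb occasional verbosity; with
        # transformers' generate this corresponds to greedy decoding, made
        # explicit here (do_sample=False is also the library default):
        do_sample=False,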
)
response_ids = outputs[0]
response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
return response_text
# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)
## Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)
### Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
``` | {"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "Starling-LM-7B-alpha"], "pipeline_tag": "text-generation", "inference": false} | duyntnet/Starling-LM-7B-alpha-imatrix-GGUF | null | [
"transformers",
"gguf",
"imatrix",
"Starling-LM-7B-alpha",
"text-generation",
"en",
"license:other",
"region:us"
] | null | 2024-04-27T03:18:46+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #imatrix #Starling-LM-7B-alpha #text-generation #en #license-other #region-us
| Quantizations of URL
# From original readme
## Uses
Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to reduce this.
Our model follows the exact chat template and usage as Openchat 3.5. Please refer to their model card for more details.
In addition, our model is hosted on LMSYS Chatbot Arena for free testing.
The conversation template is the same as Openchat 3.5:
## Code Examples
| [
"# From original readme",
"## Uses\n\n\n\nImportant: Please use the exact chat template provided below for the model. Otherwise there will be a degrade in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.\n\nOur model follows the exact chat template and usage as Openchat 3.5. Please refer to their model card for more details.\nIn addition, our model is hosted on LMSYS Chatbot Arena for free test.\n\nThe conversation template is the same as Openchat 3.5:",
"## Code Examples"
] | [
"TAGS\n#transformers #gguf #imatrix #Starling-LM-7B-alpha #text-generation #en #license-other #region-us \n",
"# From original readme",
"## Uses\n\n\n\nImportant: Please use the exact chat template provided below for the model. Otherwise there will be a degrade in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.\n\nOur model follows the exact chat template and usage as Openchat 3.5. Please refer to their model card for more details.\nIn addition, our model is hosted on LMSYS Chatbot Arena for free test.\n\nThe conversation template is the same as Openchat 3.5:",
"## Code Examples"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** liminerity
- **License:** apache-2.0
- **Finetuned from model :** Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1"} | liminerity/llama-3-8b-silent-star | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:19:55+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: liminerity
- License: apache-2.0
- Finetuned from model : Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: liminerity\n- License: apache-2.0\n- Finetuned from model : Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #pytorch #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: liminerity\n- License: apache-2.0\n- Finetuned from model : Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-lora-64-no-quant-2k
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
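The batch-size totals above follow directly from the per-device sizes, the device count, and the accumulation steps; a quick check of the arithmetic:

```python
# How the listed totals are derived:
per_device_train_batch = 2
num_devices = 4
grad_accum_steps = 32

total_train_batch = per_device_train_batch * num_devices * grad_accum_steps
assert total_train_batch == 256  # matches total_train_batch_size above

total_eval_batch = 2 * 4  # eval uses no gradient accumulation: 8
```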
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo"], "datasets": ["updated", "original"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "zephyr-7b-lora-64-no-quant-2k", "results": []}]} | YYYYYYibo/zephyr-7b-lora-64-no-quant-2k | null | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:updated",
"dataset:original",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | 2024-04-27T03:20:21+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #mistral #alignment-handbook #generated_from_trainer #trl #dpo #dataset-updated #dataset-original #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #region-us
|
# zephyr-7b-lora-64-no-quant-2k
This model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | [
"# zephyr-7b-lora-64-no-quant-2k\n\nThis model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 256\n- total_eval_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.1\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #mistral #alignment-handbook #generated_from_trainer #trl #dpo #dataset-updated #dataset-original #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #region-us \n",
"# zephyr-7b-lora-64-no-quant-2k\n\nThis model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 256\n- total_eval_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.1\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4638
- F1 Score: 0.7886
- Accuracy: 0.7881
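No inference recipe is documented for this adapter. As a non-authoritative sketch, it could be loaded through PEFT's auto classes; the sequence-classification head and the toy DNA input below are assumptions, not details confirmed by this card:

```python
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

# Assumes the repo holds a standard PEFT adapter on top of the seqsight base.
model = AutoPeftModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_8192_512_30M-L32_f"
)
tokenizer = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_8192_512_30M")

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
logits = model(**inputs).logits
```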
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5686 | 1.15 | 200 | 0.5615 | 0.7215 | 0.7233 |
| 0.5079 | 2.3 | 400 | 0.5688 | 0.7010 | 0.7071 |
| 0.4874 | 3.45 | 600 | 0.5293 | 0.7366 | 0.7377 |
| 0.4783 | 4.6 | 800 | 0.5067 | 0.7505 | 0.7506 |
| 0.471 | 5.75 | 1000 | 0.5038 | 0.7570 | 0.7567 |
| 0.4593 | 6.9 | 1200 | 0.5000 | 0.7680 | 0.7675 |
| 0.451 | 8.05 | 1400 | 0.5091 | 0.7601 | 0.7596 |
| 0.4445 | 9.2 | 1600 | 0.5151 | 0.7528 | 0.7531 |
| 0.4361 | 10.34 | 1800 | 0.5131 | 0.7579 | 0.7575 |
| 0.4336 | 11.49 | 2000 | 0.5120 | 0.7658 | 0.7654 |
| 0.4209 | 12.64 | 2200 | 0.5051 | 0.7592 | 0.7589 |
| 0.4155 | 13.79 | 2400 | 0.5164 | 0.7554 | 0.7553 |
| 0.4101 | 14.94 | 2600 | 0.4929 | 0.7690 | 0.7686 |
| 0.4023 | 16.09 | 2800 | 0.5523 | 0.7449 | 0.7460 |
| 0.3963 | 17.24 | 3000 | 0.5205 | 0.7690 | 0.7686 |
| 0.3893 | 18.39 | 3200 | 0.5240 | 0.7604 | 0.7600 |
| 0.3857 | 19.54 | 3400 | 0.5227 | 0.7608 | 0.7603 |
| 0.3733 | 20.69 | 3600 | 0.5274 | 0.7668 | 0.7665 |
| 0.3671 | 21.84 | 3800 | 0.5369 | 0.7570 | 0.7567 |
| 0.3584 | 22.99 | 4000 | 0.5472 | 0.7583 | 0.7582 |
| 0.3573 | 24.14 | 4200 | 0.5395 | 0.7627 | 0.7625 |
| 0.3427 | 25.29 | 4400 | 0.5633 | 0.7579 | 0.7575 |
| 0.3432 | 26.44 | 4600 | 0.5609 | 0.7630 | 0.7625 |
| 0.34 | 27.59 | 4800 | 0.5436 | 0.7630 | 0.7625 |
| 0.3268 | 28.74 | 5000 | 0.5575 | 0.7583 | 0.7578 |
| 0.3327 | 29.89 | 5200 | 0.5748 | 0.7576 | 0.7571 |
| 0.3184 | 31.03 | 5400 | 0.6080 | 0.7481 | 0.7485 |
| 0.3124 | 32.18 | 5600 | 0.6024 | 0.7576 | 0.7571 |
| 0.3023 | 33.33 | 5800 | 0.5905 | 0.7619 | 0.7614 |
| 0.3034 | 34.48 | 6000 | 0.5878 | 0.7565 | 0.7560 |
| 0.296 | 35.63 | 6200 | 0.6280 | 0.7581 | 0.7578 |
| 0.2959 | 36.78 | 6400 | 0.5909 | 0.7576 | 0.7571 |
| 0.2882 | 37.93 | 6600 | 0.6093 | 0.7601 | 0.7596 |
| 0.2842 | 39.08 | 6800 | 0.6144 | 0.7593 | 0.7589 |
| 0.2795 | 40.23 | 7000 | 0.6325 | 0.7634 | 0.7629 |
| 0.2753 | 41.38 | 7200 | 0.6252 | 0.7626 | 0.7621 |
| 0.2725 | 42.53 | 7400 | 0.6288 | 0.7598 | 0.7593 |
| 0.2677 | 43.68 | 7600 | 0.6609 | 0.7544 | 0.7539 |
| 0.2641 | 44.83 | 7800 | 0.6607 | 0.7592 | 0.7589 |
| 0.2631 | 45.98 | 8000 | 0.6491 | 0.7494 | 0.7488 |
| 0.2561 | 47.13 | 8200 | 0.6762 | 0.7568 | 0.7564 |
| 0.2575 | 48.28 | 8400 | 0.6790 | 0.7489 | 0.7485 |
| 0.2553 | 49.43 | 8600 | 0.6813 | 0.7464 | 0.7460 |
| 0.2532 | 50.57 | 8800 | 0.6796 | 0.7554 | 0.7549 |
| 0.2533 | 51.72 | 9000 | 0.6673 | 0.7543 | 0.7539 |
| 0.246 | 52.87 | 9200 | 0.6832 | 0.7511 | 0.7506 |
| 0.2484 | 54.02 | 9400 | 0.6774 | 0.7533 | 0.7528 |
| 0.2451 | 55.17 | 9600 | 0.6841 | 0.7543 | 0.7539 |
| 0.2451 | 56.32 | 9800 | 0.6777 | 0.7551 | 0.7546 |
| 0.2412 | 57.47 | 10000 | 0.6790 | 0.7544 | 0.7539 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T03:20:51+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_8192\_512\_30M-L32\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4638
* F1 Score: 0.7886
* Accuracy: 0.7881
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3
<Gallery />
## Model description
These are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of
generating images for the [Critical Dream](https://github.com/cosmicBboy/critical-dream)
project.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: stabilityai/sdxl-vae.
## Trigger words
You should use "a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3/tree/main) them in the Files & versions tab.
## Tracker run link
https://wandb.ai/nielsbantilan/dreambooth-lora-sd-xl/runs/thnecpj9
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
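Until the snippet above is filled in, here is a minimal, unofficial sketch of one plausible way to run these weights with `diffusers`. It loads the SDXL base and this LoRA; the inference step count is an assumption, and it does not swap in the training VAE:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3"
)

prompt = (
    "a picture of [dm-matt-mercer], a dungeon master. background is a forest. "
    "fantasy art style, high quality, highly detailed, sharp focus"
)
image = pipe(prompt, num_inference_steps=30).images[0]  # step count is an assumption
image.save("dm-matt-mercer.png")
```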
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "prompt": "a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\"", "widget": [{"text": "a picture of [dm-matt-mercer]", "output": {"url": "image_0.png"}}, {"text": "a picture of [dm-matt-mercer]", "output": {"url": "image_1.png"}}, {"text": "a picture of a dungeon master.", "output": {"url": "image_2.png"}}, {"text": "a picture of a dungeon master.", "output": {"url": "image_3.png"}}, {"text": "a picture of [critrole-fjord], a male half-orc warlock. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_4.png"}}, {"text": "a picture of [critrole-fjord], a male half-orc warlock. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_5.png"}}, {"text": "a picture of a male half-orc warlock", "output": {"url": "image_6.png"}}, {"text": "a picture of a male half-orc warlock", "output": {"url": "image_7.png"}}, {"text": "a picture of [critrole-beau], a female human monk. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_8.png"}}, {"text": "a picture of [critrole-beau], a female human monk. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_9.png"}}, {"text": "a picture of a female human monk", "output": {"url": "image_10.png"}}, {"text": "a picture of a female human monk", "output": {"url": "image_11.png"}}, {"text": "a picture of [critrole-caduceus], a male firbolg cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_12.png"}}, {"text": "a picture of [critrole-caduceus], a male firbolg cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_13.png"}}, {"text": "a picture of a male firbolg cleric", "output": {"url": "image_14.png"}}, {"text": "a picture of a male firbolg cleric", "output": {"url": "image_15.png"}}, {"text": "a picture of [critrole-caleb], a male human wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_16.png"}}, {"text": "a picture of [critrole-caleb], a male human wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_17.png"}}, {"text": "a picture of a male human wizard", "output": {"url": "image_18.png"}}, {"text": "a picture of a male human wizard", "output": {"url": "image_19.png"}}, {"text": "a picture of [critrole-jester], a female tiefling cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_20.png"}}, {"text": "a picture of [critrole-jester], a female tiefling cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_21.png"}}, {"text": "a picture of a female tiefling cleric", "output": {"url": "image_22.png"}}, {"text": "a picture of a female tiefling cleric", "output": {"url": "image_23.png"}}, {"text": "a picture of [critrole-nott], a female goblin rogue. 
background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_24.png"}}, {"text": "a picture of [critrole-nott], a female goblin rogue. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_25.png"}}, {"text": "a picture of a female goblin rogue", "output": {"url": "image_26.png"}}, {"text": "a picture of a female goblin rogue", "output": {"url": "image_27.png"}}, {"text": "a picture of [critrole-veth], a female halfling rogue/wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_28.png"}}, {"text": "a picture of [critrole-veth], a female halfling rogue/wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_29.png"}}, {"text": "a picture of a female halfling rogue/wizard", "output": {"url": "image_30.png"}}, {"text": "a picture of a female halfling rogue/wizard", "output": {"url": "image_31.png"}}, {"text": "a picture of [critrole-yasha], a female aasimar barbarian. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_32.png"}}, {"text": "a picture of [critrole-yasha], a female aasimar barbarian. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_33.png"}}, {"text": "a picture of a female aasimar barbarian", "output": {"url": "image_34.png"}}, {"text": "a picture of a female aasimar barbarian", "output": {"url": "image_35.png"}}, {"text": "a picture of [critrole-mollymauk], a male tiefling blood hunter. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_36.png"}}, {"text": "a picture of [critrole-mollymauk], a male tiefling blood hunter. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_37.png"}}, {"text": "a picture of a male tiefling blood hunter", "output": {"url": "image_38.png"}}, {"text": "a picture of a male tiefling blood hunter", "output": {"url": "image_39.png"}}, {"text": "a picture of [critrole-essek], a male drow wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_40.png"}}, {"text": "a picture of [critrole-essek], a male drow wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_41.png"}}, {"text": "a picture of a male drow wizard", "output": {"url": "image_42.png"}}, {"text": "a picture of a male drow wizard", "output": {"url": "image_43.png"}}]} | cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3 | null | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-04-27T03:21:15+00:00 | [] | [] | TAGS
#diffusers #text-to-image #stable-diffusion-xl #stable-diffusion-xl-diffusers #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3
<Gallery />
## Model description
These are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of
generating images for the Critical Dream
project.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: True.
Special VAE used for training: stabilityai/sdxl-vae.
## Trigger words
You should use "a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus" to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
## Tracker run link
URL
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3\n\n<Gallery />",
"## Model description\n\nThese are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of\ngenerating images for the Critical Dream\nproject.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: True.\n\nSpecial VAE used for training: stabilityai/sdxl-vae.",
"## Trigger words\n\nYou should use a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\" to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Tracker run link\n\nURL",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #text-to-image #stable-diffusion-xl #stable-diffusion-xl-diffusers #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3\n\n<Gallery />",
"## Model description\n\nThese are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.7.3 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of\ngenerating images for the Critical Dream\nproject.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: True.\n\nSpecial VAE used for training: stabilityai/sdxl-vae.",
"## Trigger words\n\nYou should use a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\" to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Tracker run link\n\nURL",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3835
- Precision: 0.6242
- Recall: 0.6563
- F1: 0.6399
- Accuracy: 0.9043
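As a quick, illustrative way to try the checkpoint (the repo id comes from this card's metadata; the label set depends on the undocumented training data), it can be loaded through the `pipeline` API:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="manish1103125/results",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```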
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 368 | 0.3060 | 0.5800 | 0.6174 | 0.5981 | 0.8963 |
| 0.2936 | 2.0 | 736 | 0.2901 | 0.6033 | 0.6240 | 0.6135 | 0.8992 |
| 0.2936 | 3.0 | 1104 | 0.3063 | 0.6304 | 0.6364 | 0.6334 | 0.9052 |
| 0.1156 | 4.0 | 1472 | 0.3404 | 0.6293 | 0.6563 | 0.6425 | 0.9033 |
| 0.1156 | 5.0 | 1840 | 0.3835 | 0.6242 | 0.6563 | 0.6399 | 0.9043 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "results", "results": []}]} | manish1103125/results | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:21:32+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| results
=======
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3835
* Precision: 0.6242
* Recall: 0.6563
* F1: 0.6399
* Accuracy: 0.9043
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5855
- F1 Score: 0.6926
- Accuracy: 0.6929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
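A minimal, runnable sketch of an optimizer and schedule matching the configuration above (the `Linear` module is a stand-in for the PEFT-wrapped network, which this card does not show):

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(4, 2)  # stand-in for the PEFT-wrapped network
optimizer = torch.optim.AdamW(
    model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,         # the card lists no warmup for this run
    num_training_steps=10_000,  # training_steps above
)
```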
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6611 | 0.87 | 200 | 0.6361 | 0.6438 | 0.6435 |
| 0.6221 | 1.74 | 400 | 0.6143 | 0.6660 | 0.6660 |
| 0.6062 | 2.61 | 600 | 0.6063 | 0.6707 | 0.6704 |
| 0.5939 | 3.48 | 800 | 0.6001 | 0.6788 | 0.6785 |
| 0.5887 | 4.35 | 1000 | 0.6005 | 0.6763 | 0.6764 |
| 0.5844 | 5.22 | 1200 | 0.5994 | 0.6740 | 0.6772 |
| 0.5804 | 6.09 | 1400 | 0.6114 | 0.6712 | 0.6755 |
| 0.5754 | 6.96 | 1600 | 0.5962 | 0.6802 | 0.6807 |
| 0.5673 | 7.83 | 1800 | 0.6015 | 0.6832 | 0.6829 |
| 0.5705 | 8.7 | 2000 | 0.6033 | 0.6827 | 0.6826 |
| 0.5603 | 9.57 | 2200 | 0.5888 | 0.6866 | 0.6864 |
| 0.563 | 10.43 | 2400 | 0.5926 | 0.6934 | 0.6932 |
| 0.5561 | 11.3 | 2600 | 0.5848 | 0.6911 | 0.6916 |
| 0.5567 | 12.17 | 2800 | 0.5865 | 0.6857 | 0.6856 |
| 0.5531 | 13.04 | 3000 | 0.5878 | 0.6938 | 0.6935 |
| 0.549 | 13.91 | 3200 | 0.5881 | 0.6899 | 0.6897 |
| 0.543 | 14.78 | 3400 | 0.5935 | 0.6905 | 0.6908 |
| 0.5421 | 15.65 | 3600 | 0.5829 | 0.6992 | 0.6989 |
| 0.5387 | 16.52 | 3800 | 0.5842 | 0.6934 | 0.6932 |
| 0.5373 | 17.39 | 4000 | 0.5919 | 0.6952 | 0.6954 |
| 0.5384 | 18.26 | 4200 | 0.5845 | 0.6952 | 0.6954 |
| 0.5325 | 19.13 | 4400 | 0.5920 | 0.7038 | 0.7035 |
| 0.5312 | 20.0 | 4600 | 0.5839 | 0.7006 | 0.7008 |
| 0.5317 | 20.87 | 4800 | 0.5872 | 0.7006 | 0.7008 |
| 0.527 | 21.74 | 5000 | 0.5901 | 0.6967 | 0.6967 |
| 0.5234 | 22.61 | 5200 | 0.5887 | 0.7060 | 0.7057 |
| 0.5251 | 23.48 | 5400 | 0.6010 | 0.6930 | 0.6954 |
| 0.5206 | 24.35 | 5600 | 0.5889 | 0.6974 | 0.6973 |
| 0.5227 | 25.22 | 5800 | 0.5965 | 0.6996 | 0.6997 |
| 0.5139        | 26.09 | 6000  | 0.6060          | 0.6994   | 0.7000   |
| 0.519 | 26.96 | 6200 | 0.5925 | 0.6994 | 0.7003 |
| 0.514 | 27.83 | 6400 | 0.6074 | 0.6966 | 0.6986 |
| 0.5142 | 28.7 | 6600 | 0.5919 | 0.7015 | 0.7014 |
| 0.5129 | 29.57 | 6800 | 0.5962 | 0.7016 | 0.7014 |
| 0.5069 | 30.43 | 7000 | 0.5923 | 0.7062 | 0.7065 |
| 0.5132 | 31.3 | 7200 | 0.6009 | 0.6981 | 0.6984 |
| 0.5065 | 32.17 | 7400 | 0.6015 | 0.6985 | 0.6986 |
| 0.508 | 33.04 | 7600 | 0.5950 | 0.6975 | 0.6976 |
| 0.5101 | 33.91 | 7800 | 0.5959 | 0.7003 | 0.7008 |
| 0.5028 | 34.78 | 8000 | 0.6005 | 0.6991 | 0.6989 |
| 0.5043 | 35.65 | 8200 | 0.6004 | 0.6992 | 0.6992 |
| 0.5052 | 36.52 | 8400 | 0.5988 | 0.7013 | 0.7014 |
| 0.5001 | 37.39 | 8600 | 0.6034 | 0.6981 | 0.6978 |
| 0.4996 | 38.26 | 8800 | 0.6048 | 0.6971 | 0.6976 |
| 0.5049        | 39.13 | 9000  | 0.6043          | 0.6998   | 0.7000   |
| 0.5001 | 40.0 | 9200 | 0.6024 | 0.7026 | 0.7024 |
| 0.4987 | 40.87 | 9400 | 0.6031 | 0.6970 | 0.6967 |
| 0.4975 | 41.74 | 9600 | 0.6039 | 0.6999 | 0.6997 |
| 0.5044 | 42.61 | 9800 | 0.6008 | 0.7012 | 0.7011 |
| 0.4979 | 43.48 | 10000 | 0.6025 | 0.7017 | 0.7016 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T03:22:47+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K4me3-seqsight\_8192\_512\_30M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5855
* F1 Score: 0.6926
* Accuracy: 0.6929
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5785
- F1 Score: 0.6982
- Accuracy: 0.6984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6693 | 0.87 | 200 | 0.6534 | 0.6222 | 0.6236 |
| 0.6427 | 1.74 | 400 | 0.6303 | 0.6502 | 0.6503 |
| 0.6307 | 2.61 | 600 | 0.6191 | 0.6628 | 0.6625 |
| 0.6147 | 3.48 | 800 | 0.6102 | 0.6633 | 0.6633 |
| 0.6125 | 4.35 | 1000 | 0.6106 | 0.6576 | 0.6592 |
| 0.6062 | 5.22 | 1200 | 0.6167 | 0.6607 | 0.6639 |
| 0.6032 | 6.09 | 1400 | 0.6192 | 0.6533 | 0.6584 |
| 0.5988 | 6.96 | 1600 | 0.6095 | 0.6696 | 0.6701 |
| 0.5959 | 7.83 | 1800 | 0.6071 | 0.6671 | 0.6677 |
| 0.5949 | 8.7 | 2000 | 0.6028 | 0.6736 | 0.6734 |
| 0.5888 | 9.57 | 2200 | 0.5976 | 0.6785 | 0.6783 |
| 0.5926 | 10.43 | 2400 | 0.5974 | 0.6799 | 0.6796 |
| 0.5889 | 11.3 | 2600 | 0.5984 | 0.6801 | 0.6810 |
| 0.5877 | 12.17 | 2800 | 0.5987 | 0.6783 | 0.6780 |
| 0.587 | 13.04 | 3000 | 0.5950 | 0.6806 | 0.6804 |
| 0.5847 | 13.91 | 3200 | 0.5936 | 0.6816 | 0.6815 |
| 0.5823 | 14.78 | 3400 | 0.5943 | 0.6798 | 0.6807 |
| 0.5816 | 15.65 | 3600 | 0.5929 | 0.6830 | 0.6832 |
| 0.5793 | 16.52 | 3800 | 0.5972 | 0.6814 | 0.6815 |
| 0.5786 | 17.39 | 4000 | 0.5914 | 0.6868 | 0.6867 |
| 0.5773 | 18.26 | 4200 | 0.5954 | 0.6863 | 0.6861 |
| 0.576 | 19.13 | 4400 | 0.5976 | 0.6855 | 0.6853 |
| 0.5754 | 20.0 | 4600 | 0.5908 | 0.6883 | 0.6886 |
| 0.578 | 20.87 | 4800 | 0.5926 | 0.6828 | 0.6829 |
| 0.5744 | 21.74 | 5000 | 0.5937 | 0.6859 | 0.6864 |
| 0.5723 | 22.61 | 5200 | 0.5884 | 0.6909 | 0.6908 |
| 0.5747 | 23.48 | 5400 | 0.5952 | 0.6837 | 0.6853 |
| 0.5696 | 24.35 | 5600 | 0.5902 | 0.6907 | 0.6905 |
| 0.5742 | 25.22 | 5800 | 0.5922 | 0.6866 | 0.6878 |
| 0.5682 | 26.09 | 6000 | 0.5960 | 0.6856 | 0.6864 |
| 0.5728 | 26.96 | 6200 | 0.5908 | 0.6881 | 0.6889 |
| 0.5687 | 27.83 | 6400 | 0.5986 | 0.6824 | 0.6851 |
| 0.5667 | 28.7 | 6600 | 0.5913 | 0.6876 | 0.6880 |
| 0.5675 | 29.57 | 6800 | 0.5865 | 0.6906 | 0.6905 |
| 0.5655 | 30.43 | 7000 | 0.5901 | 0.6881 | 0.6891 |
| 0.5702 | 31.3 | 7200 | 0.5908 | 0.6847 | 0.6856 |
| 0.5655 | 32.17 | 7400 | 0.5908 | 0.6875 | 0.6883 |
| 0.5673 | 33.04 | 7600 | 0.5842 | 0.6899 | 0.6899 |
| 0.567 | 33.91 | 7800 | 0.5884 | 0.6889 | 0.6894 |
| 0.5643 | 34.78 | 8000 | 0.5900 | 0.6898 | 0.6899 |
| 0.5648 | 35.65 | 8200 | 0.5865 | 0.6928 | 0.6929 |
| 0.5646 | 36.52 | 8400 | 0.5887 | 0.6902 | 0.6908 |
| 0.5655 | 37.39 | 8600 | 0.5885 | 0.6903 | 0.6905 |
| 0.5614 | 38.26 | 8800 | 0.5922 | 0.6897 | 0.6905 |
| 0.5687 | 39.13 | 9000 | 0.5876 | 0.6902 | 0.6910 |
| 0.5637 | 40.0 | 9200 | 0.5869 | 0.6919 | 0.6921 |
| 0.561 | 40.87 | 9400 | 0.5883 | 0.6917 | 0.6916 |
| 0.5598 | 41.74 | 9600 | 0.5889 | 0.6922 | 0.6924 |
| 0.5699 | 42.61 | 9800 | 0.5862 | 0.6905 | 0.6908 |
| 0.5619 | 43.48 | 10000 | 0.5868 | 0.6917 | 0.6918 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T03:22:47+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K4me3-seqsight\_8192\_512\_30M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5785
* F1 Score: 0.6982
* Accuracy: 0.6984
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Qwen1.5-110B-Chat
## About Quantization
我们使用modelscope [swift](https://github.com/modelscope/swift/)仓库进行AWQ量化. 量化文档可以查看[这里](https://github.com/modelscope/swift/blob/main/docs/source/LLM/LLM%E9%87%8F%E5%8C%96%E6%96%87%E6%A1%A3.md). 量化命令如下:
We use the modelscope [swift](https://github.com/modelscope/swift/) repository to perform AWQ quantization. Quantization documentation can be found [here](https://github.com/modelscope/swift/blob/main/docs/source_en/LLM/LLM-quantization.md). The quantization command is as follows:
```bash
CUDA_VISIBLE_DEVICES=0 swift export \
--model_type qwen1half-110b-chat --quant_bits 4 \
--dataset sharegpt-gpt4-mini alpaca-zh alpaca-en \
--quant_method awq --quant_seqlen 8192 --quant_n_samples 512
```
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need of `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
<br>
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, we have temporarily excluded GQA (except for 32B and 110B) and the mixture of SWA and full attention.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```
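For example (a standard pip upgrade, not a project-specific command):

```bash
pip install "transformers>=4.37.0"
```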
## Quickstart
Here we provide a code snippet with `apply_chat_template` to show you how to load the tokenizer and model, and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"study-hjt/Qwen1.5-110B-Chat-AWQ",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("study-hjt/Qwen1.5-110B-Chat-AWQ")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Tips
* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
## Citation
If you find our work helpful, feel free to cite us.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
| {"language": ["en"], "license": "other", "tags": ["chat", "qwen", "awq", "int4", "4bits"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation"} | study-hjt/Qwen1.5-110B-Chat-AWQ | null | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"qwen",
"awq",
"int4",
"4bits",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-27T03:24:04+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #qwen2 #text-generation #chat #qwen #awq #int4 #4bits #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Qwen1.5-110B-Chat
## About Quantization
我们使用modelscope swift仓库进行AWQ量化. 量化文档可以查看这里. 量化命令如下:
We use the modelscope swift repository to perform AWQ quantization. Quantization documentation can be found here. The quantization command is as follows:
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need of 'trust_remote_code'.
For more details, please refer to our blog post and GitHub repo.
<br>
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, we have temporarily excluded GQA (except for 32B and 110B) and the mixture of SWA and full attention.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install 'transformers>=4.37.0', or you might encounter the following error:
## Quickstart
Here we provide a code snippet with 'apply_chat_template' to show you how to load the tokenizer and model, and how to generate content.
## Tips
* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in 'generation_config.json'.
If you find our work helpful, feel free to cite us.
| [
"# Qwen1.5-110B-Chat",
"## About Quantization\n我们使用modelscope swift仓库进行AWQ量化. 量化文档可以查看这里. 量化命令如下:\n\nWe use the modelscope swift repository to perform GPTQ quantization. Quantization documentation can be found here. The quantization command is as follows:",
"## Introduction\n\nQwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: \n\n* 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated;\n* Significant performance improvement in human preference for chat models;\n* Multilingual support of both base and chat models;\n* Stable support of 32K context length for models of all sizes\n* No need of 'trust_remote_code'.\n\nFor more details, please refer to our blog post and GitHub repo.\n<br>",
"## Model Details\nQwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B and 110B) and the mixture of SWA and full attention.",
"## Training details\nWe pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.",
"## Requirements\nThe code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install 'transformers>=4.37.0', or you might encounter the following error:",
"## Quickstart\n\nHere provides a code snippet with 'apply_chat_template' to show you how to load the tokenizer and model and how to generate contents.",
"## Tips\n\n* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in 'generation_config.json'.\n\n\nIf you find our work helpful, feel free to give us a cite."
] | [
"TAGS\n#transformers #safetensors #qwen2 #text-generation #chat #qwen #awq #int4 #4bits #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Qwen1.5-110B-Chat",
"## About Quantization\n我们使用modelscope swift仓库进行AWQ量化. 量化文档可以查看这里. 量化命令如下:\n\nWe use the modelscope swift repository to perform GPTQ quantization. Quantization documentation can be found here. The quantization command is as follows:",
"## Introduction\n\nQwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: \n\n* 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated;\n* Significant performance improvement in human preference for chat models;\n* Multilingual support of both base and chat models;\n* Stable support of 32K context length for models of all sizes\n* No need of 'trust_remote_code'.\n\nFor more details, please refer to our blog post and GitHub repo.\n<br>",
"## Model Details\nQwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B and 110B) and the mixture of SWA and full attention.",
"## Training details\nWe pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.",
"## Requirements\nThe code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install 'transformers>=4.37.0', or you might encounter the following error:",
"## Quickstart\n\nHere provides a code snippet with 'apply_chat_template' to show you how to load the tokenizer and model and how to generate contents.",
"## Tips\n\n* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in 'generation_config.json'.\n\n\nIf you find our work helpful, feel free to give us a cite."
] |
token-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | manish1103125/NER-Task1 | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:24:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** vutuka
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
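The card ships no usage code; below is a minimal sketch, assuming the adapter id `vutuka/llama-3-8b-african-aya-lora` taken from this row's metadata and loading it with PEFT on top of the 4-bit base model. This is illustrative, not the author's published recipe, and the `load_in_4bit` pass-through is an assumption about the environment (peft, transformers, bitsandbytes, and a CUDA GPU).

```python
# Minimal sketch (not from the original card): load the LoRA adapter with PEFT.
# The adapter id comes from this row's metadata; the prompt is illustrative.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "vutuka/llama-3-8b-african-aya-lora"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

inputs = tokenizer("Describe a popular West African dish.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```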
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | vutuka/llama-3-8b-african-aya-lora | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:26:54+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: vutuka
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
<img src="URL" width="200"/>
| [
"# Uploaded model\n\n- Developed by: vutuka\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: vutuka\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
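As the card leaves this blank, here is a minimal sketch, assuming the repository id `sherrys/426_mistral_RAFT_50e_10s` taken from this row's metadata and a standard Mistral chat template (the row is tagged `conversational`); the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load the checkpoint and chat with it via the tokenizer's
# chat template. The model id comes from this row's metadata.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sherrys/426_mistral_RAFT_50e_10s"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "In one sentence, what is retrieval-augmented fine-tuning?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated continuation, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```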
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
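For readers who prefer a formula to the web calculator, the estimate reduces to hours × power × PUE × regional carbon intensity. The sketch below uses made-up placeholder numbers, not measurements for this model.

```python
# Back-of-the-envelope estimate in the spirit of Lacoste et al. (2019).
# Every value here is a hypothetical placeholder.
hours_used = 24.0          # total accelerator hours
gpu_power_kw = 0.3         # average draw per accelerator, in kW (~300 W)
pue = 1.1                  # data-center power usage effectiveness
carbon_intensity = 0.4     # kg CO2eq per kWh for the compute region

kg_co2eq = hours_used * gpu_power_kw * pue * carbon_intensity
print(f"Estimated emissions: {kg_co2eq:.2f} kg CO2eq")
```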
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | sherrys/426_mistral_RAFT_50e_10s | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T03:27:29+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
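Since the snippet is missing, here is a minimal sketch using the id `pruning/x1dccfy` from this row's metadata; the `trust_remote_code` flag is an assumption, as some StableLM checkpoints ship custom modeling code.

```python
# Minimal sketch: quick generation through the high-level pipeline API.
# trust_remote_code=True is only needed if the checkpoint ships custom code.
from transformers import pipeline

generator = pipeline("text-generation", model="pruning/x1dccfy", trust_remote_code=True)
result = generator("The key idea behind structured pruning is", max_new_tokens=50)
print(result[0]["generated_text"])
```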
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/x1dccfy | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:29:18+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
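Again the card omits the snippet; a minimal sketch follows, assuming the id `pruning/vumpzdo` from this row's metadata and a GPU capable of fp16.

```python
# Minimal sketch: explicit tokenizer/model loading and greedy decoding.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pruning/vumpzdo"  # taken from this row's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Sparsity in neural networks means", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```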
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/vumpzdo | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:29:18+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
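A minimal sketch for this card, assuming the id `pruning/blgymh6` from this row's metadata; the `conversational` tag suggests a chat template is defined, so the prompt is formatted through `apply_chat_template`. Everything below is illustrative.

```python
# Minimal sketch: chat-style generation via the tokenizer's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pruning/blgymh6"  # taken from this row's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain magnitude pruning in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=80, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```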
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/blgymh6 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:29:18+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
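As with the sibling cards, a minimal sketch is given here, assuming the id `pruning/9xu4ir3` from this row's metadata; the sampling parameters are illustrative.

```python
# Minimal sketch: sampling a short completion with the pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="pruning/9xu4ir3")
out = generator(
    "Quantization and pruning differ in that",
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
)
print(out[0]["generated_text"])
```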
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/9xu4ir3 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:29:18+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
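Since this section is left as "[More Information Needed]", here is a minimal, generic sketch assuming the checkpoint loads with the standard 🤗 transformers auto classes and that the tokenizer ships a chat template (the prompt is illustrative, not from the authors):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pruning/ko2vss3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The repo is tagged "conversational", so the chat template is the natural entry point.
messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```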
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/ko2vss3 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:29:18+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
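The card leaves this section empty; a generic sketch via the high-level `pipeline` API (illustrative only, not documented by the authors):

```python
from transformers import pipeline

# Generic usage sketch; no official snippet is provided in the card.
chat = pipeline("text-generation", model="pruning/18jgts7")
print(chat("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```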
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/18jgts7 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:29:18+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** vutuka
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
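No inference snippet is included; a rough sketch for these GGUF weights via `llama-cpp-python`. The file-name pattern below is a guess — check the repo's file list for the actual `.gguf` name:

```python
from llama_cpp import Llama

# Hypothetical file pattern: 8-bit GGUF exports are commonly named *Q8_0.gguf.
llm = Llama.from_pretrained(
    repo_id="vutuka/llama-3-8b-african-aya-gguf-8bit",
    filename="*Q8_0.gguf",
)
out = llm("Translate to Swahili: Good morning.", max_tokens=64)
print(out["choices"][0]["text"])
```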
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en", "sw", "af", "fr", "yo", "am", "ar", "pt", "ig"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "datasets": ["vutuka/aya_african_alpaca"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | vutuka/llama-3-8b-african-aya-gguf-8bit | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"sw",
"af",
"fr",
"yo",
"am",
"ar",
"pt",
"ig",
"dataset:vutuka/aya_african_alpaca",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:36:39+00:00 | [] | [
"en",
"sw",
"af",
"fr",
"yo",
"am",
"ar",
"pt",
"ig"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #sw #af #fr #yo #am #ar #pt #ig #dataset-vutuka/aya_african_alpaca #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: vutuka
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/> | [
"# Uploaded model\n\n- Developed by: vutuka\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #sw #af #fr #yo #am #ar #pt #ig #dataset-vutuka/aya_african_alpaca #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: vutuka\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # older course setups used `import gym` instead

model = load_from_hub(repo_id="liqiu0202/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
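The snippet above assumes a `load_from_hub` helper (as used in the Hugging Face Deep RL course). A minimal sketch, assuming the artifact is a pickled dict carrying the Q-table plus an `env_id` key:

```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled Q-table dict from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```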
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | liqiu0202/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-04-27T03:39:10+00:00 | [] | [] | TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
| [
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] | [
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-llama-adaptertoxic2nontoxic-100-filtered-50-0.003 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:39:13+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Beans_disease_classficationv4
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0419
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
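For orientation, these settings map onto the 🤗 `TrainingArguments` API roughly as follows (a sketch, not the original training script; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="beans-vit",          # placeholder name
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=1337,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```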
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0023 | 1.0 | 17 | 0.1371 | 0.9774 |
| 0.002 | 2.0 | 34 | 0.0993 | 0.9774 |
| 0.0234 | 3.0 | 51 | 0.0419 | 0.9925 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3 | {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["AI-Lab-Makerere/beans"], "metrics": ["accuracy"], "model-index": [{"name": "Beans_disease_classficationv4", "results": []}]} | pwk666/Beans_disease_classficationv4 | null | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"en",
"dataset:AI-Lab-Makerere/beans",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:41:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #tensorboard #vit #image-classification #generated_from_trainer #en #dataset-AI-Lab-Makerere/beans #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Beans\_disease\_classficationv4
===============================
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0419
* Accuracy: 0.9925
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 64
* eval\_batch\_size: 32
* seed: 1337
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.28.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 32\n* seed: 1337\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.28.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #generated_from_trainer #en #dataset-AI-Lab-Makerere/beans #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 32\n* seed: 1337\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.28.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6576
- F1 Score: 0.7095
- Accuracy: 0.7095
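The card gives no usage snippet. Since the repo holds a PEFT adapter, inference requires loading it on top of the base model; a rough sketch in which the sequence-classification head and binary `num_labels` are assumptions inferred from the reported F1/accuracy (the base model may additionally need `trust_remote_code=True`):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_8192_512_30M",
    num_labels=2,  # assumed binary task
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_8192_512_30M-L32_f"
)
model.eval()
```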
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.648 | 0.87 | 200 | 0.6101 | 0.6657 | 0.6655 |
| 0.607 | 1.74 | 400 | 0.6051 | 0.6739 | 0.6755 |
| 0.5902 | 2.61 | 600 | 0.5977 | 0.6826 | 0.6823 |
| 0.5802 | 3.48 | 800 | 0.5912 | 0.6902 | 0.6899 |
| 0.5747 | 4.35 | 1000 | 0.5913 | 0.6863 | 0.6861 |
| 0.568 | 5.22 | 1200 | 0.5884 | 0.6939 | 0.6957 |
| 0.5604 | 6.09 | 1400 | 0.6068 | 0.6851 | 0.6891 |
| 0.5541 | 6.96 | 1600 | 0.5876 | 0.6939 | 0.6943 |
| 0.5426 | 7.83 | 1800 | 0.5863 | 0.6967 | 0.6965 |
| 0.5431 | 8.7 | 2000 | 0.5971 | 0.6922 | 0.6921 |
| 0.5313 | 9.57 | 2200 | 0.5867 | 0.6924 | 0.6921 |
| 0.5298 | 10.43 | 2400 | 0.5992 | 0.6965 | 0.6962 |
| 0.5217 | 11.3 | 2600 | 0.5850 | 0.6947 | 0.6951 |
| 0.5217 | 12.17 | 2800 | 0.6071 | 0.6792 | 0.6804 |
| 0.5125 | 13.04 | 3000 | 0.5930 | 0.6983 | 0.6981 |
| 0.5045 | 13.91 | 3200 | 0.6043 | 0.7008 | 0.7005 |
| 0.4953 | 14.78 | 3400 | 0.6141 | 0.6969 | 0.6978 |
| 0.4921 | 15.65 | 3600 | 0.6001 | 0.7054 | 0.7052 |
| 0.4848 | 16.52 | 3800 | 0.5976 | 0.6992 | 0.6989 |
| 0.4793 | 17.39 | 4000 | 0.6249 | 0.7014 | 0.7019 |
| 0.4798 | 18.26 | 4200 | 0.6202 | 0.6972 | 0.6978 |
| 0.4693 | 19.13 | 4400 | 0.6179 | 0.6989 | 0.6986 |
| 0.4657 | 20.0 | 4600 | 0.6190 | 0.6920 | 0.6921 |
| 0.4592 | 20.87 | 4800 | 0.6277 | 0.6969 | 0.6967 |
| 0.4517 | 21.74 | 5000 | 0.6353 | 0.6970 | 0.6967 |
| 0.4494 | 22.61 | 5200 | 0.6344 | 0.6977 | 0.6978 |
| 0.445 | 23.48 | 5400 | 0.6328 | 0.6964 | 0.6967 |
| 0.4388 | 24.35 | 5600 | 0.6401 | 0.6945 | 0.6943 |
| 0.4357 | 25.22 | 5800 | 0.6670 | 0.6972 | 0.6973 |
| 0.4274 | 26.09 | 6000 | 0.6696 | 0.7014 | 0.7014 |
| 0.4281 | 26.96 | 6200 | 0.6444 | 0.7005 | 0.7005 |
| 0.4162 | 27.83 | 6400 | 0.6686 | 0.7077 | 0.7076 |
| 0.4204 | 28.7 | 6600 | 0.6702 | 0.6922 | 0.6921 |
| 0.414 | 29.57 | 6800 | 0.6759 | 0.6919 | 0.6916 |
| 0.4063 | 30.43 | 7000 | 0.6645 | 0.6951 | 0.6948 |
| 0.4118 | 31.3 | 7200 | 0.6744 | 0.6946 | 0.6943 |
| 0.4015 | 32.17 | 7400 | 0.6699 | 0.6989 | 0.6986 |
| 0.3984 | 33.04 | 7600 | 0.6737 | 0.7026 | 0.7024 |
| 0.4009 | 33.91 | 7800 | 0.6726 | 0.6994 | 0.6992 |
| 0.3918 | 34.78 | 8000 | 0.6883 | 0.7000 | 0.6997 |
| 0.3906 | 35.65 | 8200 | 0.6940 | 0.6959 | 0.6957 |
| 0.393 | 36.52 | 8400 | 0.6872 | 0.6976 | 0.6973 |
| 0.3876 | 37.39 | 8600 | 0.6973 | 0.7008 | 0.7005 |
| 0.3806 | 38.26 | 8800 | 0.7024 | 0.6989 | 0.6986 |
| 0.386 | 39.13 | 9000 | 0.7013 | 0.7006 | 0.7003 |
| 0.3822 | 40.0 | 9200 | 0.6997 | 0.6972 | 0.6970 |
| 0.381 | 40.87 | 9400 | 0.7042 | 0.7011 | 0.7008 |
| 0.3766 | 41.74 | 9600 | 0.7011 | 0.6973 | 0.6970 |
| 0.3796 | 42.61 | 9800 | 0.7035 | 0.6951 | 0.6948 |
| 0.3775 | 43.48 | 10000 | 0.7048 | 0.6956 | 0.6954 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T03:41:05+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K4me3-seqsight\_8192\_512\_30M-L32\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6576
* F1 Score: 0.7095
* Accuracy: 0.7095
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
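Since this section is left as "[More Information Needed]", here is a minimal sketch assuming standard 🤗 transformers usage; the 4-bit quantization config is an assumption based on the repo's "4-bit" tag:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Mervyn999/mistral-7b-platypus"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

inputs = tokenizer("Explain gradient checkpointing in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```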
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "sft"]} | Mervyn999/mistral-7b-platypus | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-27T03:43:23+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# santhosh207/distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [santhosh207/distilbert-base-uncased-finetuned-ner](https://huggingface.co/santhosh207/distilbert-base-uncased-finetuned-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1538
- Validation Loss: 0.4292
- Train Precision: 0.4306
- Train Recall: 0.1479
- Train F1: 0.2201
- Train Accuracy: 0.9093
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 424, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
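
For readers reproducing this setup, the dictionary above is the optimizer that `transformers.create_optimizer` builds for Keras training; a minimal sketch is below (the warmup step count is an assumption, since the card lists none):

```python
# Sketch of rebuilding the optimizer described above with transformers'
# TF/Keras helper. The 2e-05 initial LR and 424 decay steps mirror the
# config dict; num_warmup_steps=0 is an assumption (no warmup is listed).
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-05,
    num_train_steps=424,          # PolynomialDecay decay_steps above
    num_warmup_steps=0,           # assumption
    weight_decay_rate=0.01,       # AdamWeightDecay's decoupled weight decay
)
```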
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1538 | 0.4292 | 0.4306 | 0.1479 | 0.2201 | 0.9093 | 0 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "santhosh207/distilbert-base-uncased-finetuned-ner", "model-index": [{"name": "santhosh207/distilbert-base-uncased-finetuned-ner", "results": []}]} | santhosh207/distilbert-base-uncased-finetuned-ner | null | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:santhosh207/distilbert-base-uncased-finetuned-ner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:44:06+00:00 | [] | [] | TAGS
#transformers #tf #tensorboard #distilbert #token-classification #generated_from_keras_callback #base_model-santhosh207/distilbert-base-uncased-finetuned-ner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| santhosh207/distilbert-base-uncased-finetuned-ner
=================================================
This model is a fine-tuned version of santhosh207/distilbert-base-uncased-finetuned-ner on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.1538
* Validation Loss: 0.4292
* Train Precision: 0.4306
* Train Recall: 0.1479
* Train F1: 0.2201
* Train Accuracy: 0.9093
* Epoch: 0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 424, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.40.1
* TensorFlow 2.15.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 424, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #tensorboard #distilbert #token-classification #generated_from_keras_callback #base_model-santhosh207/distilbert-base-uncased-finetuned-ner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 424, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null |
# DavidAU/Octopus-v2-Q8_0-GGUF
This model was converted to GGUF format from [`NexaAIDev/Octopus-v2`](https://huggingface.co/NexaAIDev/Octopus-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NexaAIDev/Octopus-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Octopus-v2-Q8_0-GGUF --model octopus-v2.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Octopus-v2-Q8_0-GGUF --model octopus-v2.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m octopus-v2.Q8_0.gguf -n 128
```
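
If you prefer to fetch the file programmatically rather than via `--hf-repo`, here is a small sketch using `huggingface_hub` (repo id and filename taken from the commands above):

```python
# Download the quantized GGUF once and reuse the local path with llama.cpp.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="DavidAU/Octopus-v2-Q8_0-GGUF",
    filename="octopus-v2.Q8_0.gguf",
)
print(gguf_path)  # pass this to llama-cli / llama-server via --model
```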
| {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["function calling", "on-device language model", "android", "llama-cpp", "gguf-my-repo"], "base_model": "google/gemma-2b", "inference": false, "space": false, "spaces": false, "model-index": [{"name": "Octopus-V2-2B", "results": []}]} | DavidAU/Octopus-v2-Q8_0-GGUF | null | [
"gguf",
"function calling",
"on-device language model",
"android",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:google/gemma-2b",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-27T03:46:15+00:00 | [] | [
"en"
] | TAGS
#gguf #function calling #on-device language model #android #llama-cpp #gguf-my-repo #en #base_model-google/gemma-2b #license-cc-by-nc-4.0 #region-us
|
# DavidAU/Octopus-v2-Q8_0-GGUF
This model was converted to GGUF format from 'NexaAIDev/Octopus-v2' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Octopus-v2-Q8_0-GGUF\nThis model was converted to GGUF format from 'NexaAIDev/Octopus-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #function calling #on-device language model #android #llama-cpp #gguf-my-repo #en #base_model-google/gemma-2b #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/Octopus-v2-Q8_0-GGUF\nThis model was converted to GGUF format from 'NexaAIDev/Octopus-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2668
- F1 Score: 0.9042
- Accuracy: 0.9042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
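
As a rough guide, these settings map onto a `TrainingArguments` configuration like the sketch below (the output directory name is illustrative, and the PEFT adapter config itself is not shown on this card):

```python
# Sketch of TrainingArguments mirroring the hyperparameters above;
# Adam betas/epsilon match the transformers defaults, so they are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H4-seqsight_8192_512_30M-L1_f",  # illustrative
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    max_steps=10_000,            # "training_steps" above
    lr_scheduler_type="linear",
)
```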
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4016 | 2.17 | 200 | 0.3093 | 0.8847 | 0.8843 |
| 0.2964 | 4.35 | 400 | 0.2958 | 0.8888 | 0.8884 |
| 0.283 | 6.52 | 600 | 0.2886 | 0.8907 | 0.8905 |
| 0.2802 | 8.7 | 800 | 0.2837 | 0.8927 | 0.8925 |
| 0.2722 | 10.87 | 1000 | 0.2801 | 0.8925 | 0.8925 |
| 0.2687 | 13.04 | 1200 | 0.2870 | 0.8915 | 0.8912 |
| 0.2618 | 15.22 | 1400 | 0.2740 | 0.8946 | 0.8946 |
| 0.2601 | 17.39 | 1600 | 0.2724 | 0.9002 | 0.9001 |
| 0.257 | 19.57 | 1800 | 0.2734 | 0.8987 | 0.8987 |
| 0.2554 | 21.74 | 2000 | 0.2875 | 0.8881 | 0.8877 |
| 0.2487 | 23.91 | 2200 | 0.2870 | 0.8901 | 0.8898 |
| 0.2503 | 26.09 | 2400 | 0.2836 | 0.8887 | 0.8884 |
| 0.245 | 28.26 | 2600 | 0.2713 | 0.8952 | 0.8953 |
| 0.2428 | 30.43 | 2800 | 0.2788 | 0.8914 | 0.8912 |
| 0.2393 | 32.61 | 3000 | 0.2767 | 0.8981 | 0.8980 |
| 0.2372 | 34.78 | 3200 | 0.2764 | 0.8913 | 0.8912 |
| 0.2383 | 36.96 | 3400 | 0.2766 | 0.8954 | 0.8953 |
| 0.2335 | 39.13 | 3600 | 0.2768 | 0.8966 | 0.8966 |
| 0.2297 | 41.3 | 3800 | 0.2784 | 0.8993 | 0.8994 |
| 0.2283 | 43.48 | 4000 | 0.2866 | 0.8911 | 0.8912 |
| 0.235 | 45.65 | 4200 | 0.2793 | 0.8943 | 0.8946 |
| 0.2271 | 47.83 | 4400 | 0.2771 | 0.8959 | 0.8960 |
| 0.2257 | 50.0 | 4600 | 0.2761 | 0.8925 | 0.8925 |
| 0.2237 | 52.17 | 4800 | 0.2727 | 0.9001 | 0.9001 |
| 0.2266 | 54.35 | 5000 | 0.2853 | 0.8934 | 0.8932 |
| 0.2203 | 56.52 | 5200 | 0.2904 | 0.8914 | 0.8912 |
| 0.2184 | 58.7 | 5400 | 0.2832 | 0.8933 | 0.8932 |
| 0.216 | 60.87 | 5600 | 0.2955 | 0.8873 | 0.8871 |
| 0.218 | 63.04 | 5800 | 0.2929 | 0.8866 | 0.8864 |
| 0.2166 | 65.22 | 6000 | 0.2891 | 0.8927 | 0.8925 |
| 0.2161 | 67.39 | 6200 | 0.2840 | 0.8940 | 0.8939 |
| 0.2122 | 69.57 | 6400 | 0.2867 | 0.8961 | 0.8960 |
| 0.2138 | 71.74 | 6600 | 0.2875 | 0.8939 | 0.8939 |
| 0.2138 | 73.91 | 6800 | 0.2846 | 0.8953 | 0.8953 |
| 0.21 | 76.09 | 7000 | 0.2908 | 0.8872 | 0.8871 |
| 0.211 | 78.26 | 7200 | 0.2894 | 0.8934 | 0.8932 |
| 0.2071 | 80.43 | 7400 | 0.2900 | 0.8891 | 0.8891 |
| 0.2095 | 82.61 | 7600 | 0.2854 | 0.8918 | 0.8919 |
| 0.2119 | 84.78 | 7800 | 0.2875 | 0.8905 | 0.8905 |
| 0.2056 | 86.96 | 8000 | 0.2869 | 0.8884 | 0.8884 |
| 0.2087 | 89.13 | 8200 | 0.2868 | 0.8919 | 0.8919 |
| 0.2078 | 91.3 | 8400 | 0.2907 | 0.8864 | 0.8864 |
| 0.2015 | 93.48 | 8600 | 0.2913 | 0.8876 | 0.8877 |
| 0.2047 | 95.65 | 8800 | 0.2891 | 0.8890 | 0.8891 |
| 0.2057 | 97.83 | 9000 | 0.2881 | 0.8864 | 0.8864 |
| 0.2044 | 100.0 | 9200 | 0.2899 | 0.8864 | 0.8864 |
| 0.2065 | 102.17 | 9400 | 0.2871 | 0.8884 | 0.8884 |
| 0.2046 | 104.35 | 9600 | 0.2894 | 0.8878 | 0.8877 |
| 0.2024 | 106.52 | 9800 | 0.2879 | 0.8884 | 0.8884 |
| 0.2046 | 108.7 | 10000 | 0.2888 | 0.8871 | 0.8871 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H4-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T03:46:21+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H4-seqsight\_8192\_512\_30M-L1\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2668
* F1 Score: 0.9042
* Accuracy: 0.9042
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2671
- F1 Score: 0.9090
- Accuracy: 0.9090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3675 | 2.17 | 200 | 0.2843 | 0.8927 | 0.8925 |
| 0.283 | 4.35 | 400 | 0.2795 | 0.8969 | 0.8966 |
| 0.2681 | 6.52 | 600 | 0.2709 | 0.8974 | 0.8973 |
| 0.2607 | 8.7 | 800 | 0.2902 | 0.8834 | 0.8830 |
| 0.2506 | 10.87 | 1000 | 0.2741 | 0.8905 | 0.8905 |
| 0.2438 | 13.04 | 1200 | 0.2707 | 0.8959 | 0.8960 |
| 0.2325 | 15.22 | 1400 | 0.2902 | 0.8901 | 0.8898 |
| 0.227 | 17.39 | 1600 | 0.2871 | 0.8833 | 0.8830 |
| 0.2215 | 19.57 | 1800 | 0.2891 | 0.8941 | 0.8939 |
| 0.2144 | 21.74 | 2000 | 0.2822 | 0.8920 | 0.8919 |
| 0.2059 | 23.91 | 2200 | 0.2810 | 0.8992 | 0.8994 |
| 0.2035 | 26.09 | 2400 | 0.2712 | 0.8959 | 0.8960 |
| 0.1918 | 28.26 | 2600 | 0.2774 | 0.9000 | 0.9001 |
| 0.1881 | 30.43 | 2800 | 0.2864 | 0.8898 | 0.8898 |
| 0.1812 | 32.61 | 3000 | 0.2916 | 0.8936 | 0.8939 |
| 0.1766 | 34.78 | 3200 | 0.2911 | 0.8940 | 0.8939 |
| 0.1745 | 36.96 | 3400 | 0.2998 | 0.8932 | 0.8932 |
| 0.1679 | 39.13 | 3600 | 0.2944 | 0.8916 | 0.8919 |
| 0.1595 | 41.3 | 3800 | 0.3164 | 0.8902 | 0.8905 |
| 0.1568 | 43.48 | 4000 | 0.3132 | 0.8939 | 0.8939 |
| 0.1567 | 45.65 | 4200 | 0.3105 | 0.8894 | 0.8898 |
| 0.1494 | 47.83 | 4400 | 0.3210 | 0.8883 | 0.8884 |
| 0.1446 | 50.0 | 4600 | 0.3191 | 0.8861 | 0.8864 |
| 0.1435 | 52.17 | 4800 | 0.3296 | 0.8879 | 0.8884 |
| 0.141 | 54.35 | 5000 | 0.3251 | 0.8868 | 0.8871 |
| 0.1379 | 56.52 | 5200 | 0.3268 | 0.8848 | 0.8850 |
| 0.1322 | 58.7 | 5400 | 0.3385 | 0.8876 | 0.8877 |
| 0.1268 | 60.87 | 5600 | 0.3419 | 0.8827 | 0.8830 |
| 0.1255 | 63.04 | 5800 | 0.3518 | 0.8837 | 0.8836 |
| 0.1257 | 65.22 | 6000 | 0.3507 | 0.8848 | 0.8850 |
| 0.1243 | 67.39 | 6200 | 0.3453 | 0.8871 | 0.8871 |
| 0.1151 | 69.57 | 6400 | 0.3665 | 0.8842 | 0.8843 |
| 0.1137 | 71.74 | 6600 | 0.3716 | 0.8835 | 0.8836 |
| 0.1175 | 73.91 | 6800 | 0.3582 | 0.8836 | 0.8836 |
| 0.1119 | 76.09 | 7000 | 0.3703 | 0.8829 | 0.8830 |
| 0.1102 | 78.26 | 7200 | 0.3807 | 0.8771 | 0.8775 |
| 0.1062 | 80.43 | 7400 | 0.3845 | 0.8725 | 0.8727 |
| 0.1085 | 82.61 | 7600 | 0.3857 | 0.8755 | 0.8761 |
| 0.1057 | 84.78 | 7800 | 0.3874 | 0.8827 | 0.8830 |
| 0.1028 | 86.96 | 8000 | 0.3859 | 0.8753 | 0.8754 |
| 0.1033 | 89.13 | 8200 | 0.3981 | 0.8738 | 0.8741 |
| 0.101 | 91.3 | 8400 | 0.4096 | 0.8750 | 0.8754 |
| 0.0943 | 93.48 | 8600 | 0.4177 | 0.8772 | 0.8775 |
| 0.0972 | 95.65 | 8800 | 0.4087 | 0.8791 | 0.8795 |
| 0.0966 | 97.83 | 9000 | 0.4152 | 0.8763 | 0.8768 |
| 0.0963 | 100.0 | 9200 | 0.4153 | 0.8717 | 0.8720 |
| 0.0989 | 102.17 | 9400 | 0.4139 | 0.8756 | 0.8761 |
| 0.0936 | 104.35 | 9600 | 0.4140 | 0.8738 | 0.8741 |
| 0.0933 | 106.52 | 9800 | 0.4157 | 0.8771 | 0.8775 |
| 0.097 | 108.7 | 10000 | 0.4160 | 0.8764 | 0.8768 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H4-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T03:47:02+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H4-seqsight\_8192\_512\_30M-L8\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2671
* F1 Score: 0.9090
* Accuracy: 0.9090
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
# Kaoeiri/Keiana-L3-Test5.4-8B-10-Q6_K-GGUF
This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test5.4-8B-10`](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.4-8B-10) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.4-8B-10) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Kaoeiri/Keiana-L3-Test5.4-8B-10-Q6_K-GGUF --model keiana-l3-test5.4-8b-10.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Kaoeiri/Keiana-L3-Test5.4-8B-10-Q6_K-GGUF --model keiana-l3-test5.4-8b-10.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test5.4-8b-10.Q6_K.gguf -n 128
```
| {"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "Kaoeiri/Experimenting-Test4.5-8B-2", "cgato/L3-TheSpice-8b-v0.8.3", "llama-cpp", "gguf-my-repo"], "base_model": ["Kaoeiri/Keiana-L3-Test4.7-8B-3", "Kaoeiri/Experimenting-Test4.5-8B-2", "cgato/L3-TheSpice-8b-v0.8.3"]} | Kaoeiri/Keiana-L3-Test5.4-8B-10-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Kaoeiri/Keiana-L3-Test4.7-8B-3",
"Kaoeiri/Experimenting-Test4.5-8B-2",
"cgato/L3-TheSpice-8b-v0.8.3",
"llama-cpp",
"gguf-my-repo",
"base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3",
"base_model:Kaoeiri/Experimenting-Test4.5-8B-2",
"base_model:cgato/L3-TheSpice-8b-v0.8.3",
"region:us"
] | null | 2024-04-27T03:47:44+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test4.7-8B-3 #Kaoeiri/Experimenting-Test4.5-8B-2 #cgato/L3-TheSpice-8b-v0.8.3 #llama-cpp #gguf-my-repo #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #base_model-Kaoeiri/Experimenting-Test4.5-8B-2 #base_model-cgato/L3-TheSpice-8b-v0.8.3 #region-us
|
# Kaoeiri/Keiana-L3-Test5.4-8B-10-Q6_K-GGUF
This model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test5.4-8B-10' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# Kaoeiri/Keiana-L3-Test5.4-8B-10-Q6_K-GGUF\nThis model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test5.4-8B-10' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test4.7-8B-3 #Kaoeiri/Experimenting-Test4.5-8B-2 #cgato/L3-TheSpice-8b-v0.8.3 #llama-cpp #gguf-my-repo #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #base_model-Kaoeiri/Experimenting-Test4.5-8B-2 #base_model-cgato/L3-TheSpice-8b-v0.8.3 #region-us \n",
"# Kaoeiri/Keiana-L3-Test5.4-8B-10-Q6_K-GGUF\nThis model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test5.4-8B-10' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# Uploaded model
- **Developed by:** vutuka
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
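
A minimal sketch, assuming the `unsloth` package, of loading the 4-bit base model this adapter was tuned from (the sequence length is an assumption; the card does not state it):

```python
# Load the 4-bit Llama-3 base named above with Unsloth's fast wrapper.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,   # assumption; not stated on the card
    load_in_4bit=True,
)
```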
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | vutuka/llama-3-8b-african-aya-f16 | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:48:14+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: vutuka
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: vutuka\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: vutuka\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# gate369/llama-3-8b-silent-star-Q4_K_M-GGUF
This model was converted to GGUF format from [`liminerity/llama-3-8b-silent-star`](https://huggingface.co/liminerity/llama-3-8b-silent-star) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/liminerity/llama-3-8b-silent-star) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo gate369/llama-3-8b-silent-star-Q4_K_M-GGUF --model llama-3-8b-silent-star.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo gate369/llama-3-8b-silent-star-Q4_K_M-GGUF --model llama-3-8b-silent-star.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-silent-star.Q4_K_M.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "llama-cpp", "gguf-my-repo"], "base_model": "Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1"} | gate369/llama-3-8b-silent-star-Q4_K_M-GGUF | null | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T03:49:18+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #text-generation-inference #unsloth #llama #trl #llama-cpp #gguf-my-repo #en #base_model-Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 #license-apache-2.0 #endpoints_compatible #region-us
|
# gate369/llama-3-8b-silent-star-Q4_K_M-GGUF
This model was converted to GGUF format from 'liminerity/llama-3-8b-silent-star' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# gate369/llama-3-8b-silent-star-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'liminerity/llama-3-8b-silent-star' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #llama #trl #llama-cpp #gguf-my-repo #en #base_model-Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# gate369/llama-3-8b-silent-star-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'liminerity/llama-3-8b-silent-star' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT2_DocBot_SonatafyAI_V2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1668
## Model description
More information needed
## Intended uses & limitations
More information needed
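
Pending fuller documentation, a minimal sketch of trying the checkpoint with the `transformers` pipeline (the repo id is taken from this card's listing; the prompt is illustrative):

```python
# Generate a short continuation with the fine-tuned GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="ajtamayoh/GPT2_DocBot_SonatafyAI_V2")
out = generator("What are common causes of migraines?", max_new_tokens=60)
print(out[0]["generated_text"])
```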
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3848 | 1.0 | 3615 | 3.2728 |
| 3.1553 | 2.0 | 7230 | 3.1955 |
| 2.9906 | 3.0 | 10845 | 3.1657 |
| 2.8988 | 4.0 | 14460 | 3.1610 |
| 2.8482 | 5.0 | 18075 | 3.1668 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "GPT2_DocBot_SonatafyAI_V2", "results": []}]} | ajtamayoh/GPT2_DocBot_SonatafyAI_V2 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T03:51:02+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| GPT2\_DocBot\_SonatafyAI\_V2
============================
This model is a fine-tuned version of gpt2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.1668
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-20p-POE
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the HuggingFaceH4/ultrachat_200k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
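
A minimal sketch, not the authors' exact recipe, of an SFT + LoRA run matching these settings. The LoRA rank/alpha and the text field choice are assumptions (the card does not list the adapter config), and a real recipe would apply the chat template to the dataset's `messages` column rather than train on `prompt`:

```python
# Rough TRL SFTTrainer setup mirroring the hyperparameters above.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

train_ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # assumed values

args = TrainingArguments(
    output_dir="llama2-20p-POE",       # illustrative
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,     # 4 GPUs x 4 x 2 -> effective batch 32
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",
    args=args,
    train_dataset=train_ds,
    dataset_text_field="prompt",       # simplification; see note above
    peft_config=peft_config,
)
trainer.train()
```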
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"license": "llama2", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrachat_200k"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "llama2-20p-POE", "results": []}]} | terry69/llama2-20p-POE | null | [
"peft",
"tensorboard",
"safetensors",
"llama",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-04-27T03:52:39+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #llama #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-meta-llama/Llama-2-7b-hf #license-llama2 #region-us
|
# llama2-20p-POE
This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the HuggingFaceH4/ultrachat_200k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | [
"# llama2-20p-POE\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the HuggingFaceH4/ultrachat_200k dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 1\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- total_eval_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.1\n- Transformers 4.39.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #llama #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-meta-llama/Llama-2-7b-hf #license-llama2 #region-us \n",
"# llama2-20p-POE\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the HuggingFaceH4/ultrachat_200k dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 1\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 4\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- total_eval_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.1\n- Transformers 4.39.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2482
- F1 Score: 0.9091
- Accuracy: 0.9090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3502 | 2.17 | 200 | 0.2827 | 0.8935 | 0.8932 |
| 0.2706 | 4.35 | 400 | 0.2674 | 0.8952 | 0.8953 |
| 0.2525 | 6.52 | 600 | 0.2616 | 0.9008 | 0.9008 |
| 0.2382 | 8.7 | 800 | 0.2943 | 0.8818 | 0.8816 |
| 0.2226 | 10.87 | 1000 | 0.2639 | 0.9043 | 0.9042 |
| 0.2091 | 13.04 | 1200 | 0.2804 | 0.8949 | 0.8946 |
| 0.1887 | 15.22 | 1400 | 0.3038 | 0.8875 | 0.8871 |
| 0.1773 | 17.39 | 1600 | 0.2979 | 0.8888 | 0.8884 |
| 0.165 | 19.57 | 1800 | 0.3023 | 0.8877 | 0.8877 |
| 0.1502 | 21.74 | 2000 | 0.3303 | 0.8789 | 0.8789 |
| 0.1388 | 23.91 | 2200 | 0.3254 | 0.8828 | 0.8830 |
| 0.1285 | 26.09 | 2400 | 0.3685 | 0.8817 | 0.8816 |
| 0.1145 | 28.26 | 2600 | 0.3917 | 0.8838 | 0.8843 |
| 0.1043 | 30.43 | 2800 | 0.3995 | 0.8771 | 0.8768 |
| 0.0963 | 32.61 | 3000 | 0.4367 | 0.8736 | 0.8741 |
| 0.0858 | 34.78 | 3200 | 0.4512 | 0.8750 | 0.8754 |
| 0.0828 | 36.96 | 3400 | 0.4695 | 0.8825 | 0.8830 |
| 0.0753 | 39.13 | 3600 | 0.4656 | 0.8689 | 0.8693 |
| 0.0661 | 41.3 | 3800 | 0.5001 | 0.8813 | 0.8816 |
| 0.0574 | 43.48 | 4000 | 0.5272 | 0.8761 | 0.8761 |
| 0.0581 | 45.65 | 4200 | 0.5399 | 0.8658 | 0.8665 |
| 0.0536 | 47.83 | 4400 | 0.5618 | 0.8656 | 0.8658 |
| 0.0504 | 50.0 | 4600 | 0.5276 | 0.8802 | 0.8802 |
| 0.0476 | 52.17 | 4800 | 0.5307 | 0.8687 | 0.8686 |
| 0.0425 | 54.35 | 5000 | 0.5681 | 0.8797 | 0.8795 |
| 0.0391 | 56.52 | 5200 | 0.6236 | 0.8619 | 0.8617 |
| 0.0373 | 58.7 | 5400 | 0.6070 | 0.8816 | 0.8816 |
| 0.0332 | 60.87 | 5600 | 0.6179 | 0.8707 | 0.8706 |
| 0.033 | 63.04 | 5800 | 0.6349 | 0.8721 | 0.8720 |
| 0.0326 | 65.22 | 6000 | 0.6309 | 0.8721 | 0.8720 |
| 0.0308 | 67.39 | 6200 | 0.6272 | 0.8814 | 0.8816 |
| 0.0266 | 69.57 | 6400 | 0.6561 | 0.8706 | 0.8706 |
| 0.0229 | 71.74 | 6600 | 0.6864 | 0.8776 | 0.8775 |
| 0.0264 | 73.91 | 6800 | 0.6644 | 0.8728 | 0.8727 |
| 0.0259 | 76.09 | 7000 | 0.6602 | 0.8836 | 0.8836 |
| 0.0245 | 78.26 | 7200 | 0.6310 | 0.8801 | 0.8802 |
| 0.0195 | 80.43 | 7400 | 0.7108 | 0.8769 | 0.8768 |
| 0.0224 | 82.61 | 7600 | 0.6926 | 0.8801 | 0.8802 |
| 0.0202 | 84.78 | 7800 | 0.7118 | 0.8794 | 0.8795 |
| 0.0179 | 86.96 | 8000 | 0.7417 | 0.8742 | 0.8741 |
| 0.0178 | 89.13 | 8200 | 0.7493 | 0.8802 | 0.8802 |
| 0.02 | 91.3 | 8400 | 0.7425 | 0.8761 | 0.8761 |
| 0.0146 | 93.48 | 8600 | 0.7639 | 0.8749 | 0.8747 |
| 0.0164 | 95.65 | 8800 | 0.7490 | 0.8848 | 0.8850 |
| 0.0156 | 97.83 | 9000 | 0.7522 | 0.8822 | 0.8823 |
| 0.017 | 100.0 | 9200 | 0.7557 | 0.8768 | 0.8768 |
| 0.0155 | 102.17 | 9400 | 0.7471 | 0.8795 | 0.8795 |
| 0.0152 | 104.35 | 9600 | 0.7446 | 0.8788 | 0.8789 |
| 0.0156 | 106.52 | 9800 | 0.7367 | 0.8795 | 0.8795 |
| 0.0157 | 108.7 | 10000 | 0.7382 | 0.8788 | 0.8789 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H4-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T03:53:59+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H4-seqsight\_8192\_512\_30M-L32\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2482
* F1 Score: 0.9091
* Accuracy: 0.9090
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
reinforcement-learning | ml-agents |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: i-pj/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos"]} | i-pj/poca-SoccerTwos | null | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | null | 2024-04-27T03:56:57+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #SoccerTwos #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SoccerTwos #region-us
|
# poca Agent playing SoccerTwos
This is a trained model of a poca agent playing SoccerTwos
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: i-pj/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# poca Agent playing SoccerTwos\n This is a trained model of a poca agent playing SoccerTwos\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: i-pj/poca-SoccerTwos\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #SoccerTwos #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SoccerTwos #region-us \n",
"# poca Agent playing SoccerTwos\n This is a trained model of a poca agent playing SoccerTwos\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: i-pj/poca-SoccerTwos\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3117
- F1 Score: 0.8757
- Accuracy: 0.8758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4979 | 2.13 | 200 | 0.4474 | 0.7732 | 0.7762 |
| 0.3785 | 4.26 | 400 | 0.3900 | 0.8322 | 0.8323 |
| 0.3503 | 6.38 | 600 | 0.3767 | 0.8443 | 0.8444 |
| 0.3243 | 8.51 | 800 | 0.3637 | 0.8477 | 0.8477 |
| 0.3073 | 10.64 | 1000 | 0.3454 | 0.8537 | 0.8537 |
| 0.292 | 12.77 | 1200 | 0.3486 | 0.8490 | 0.8490 |
| 0.2856 | 14.89 | 1400 | 0.3275 | 0.8597 | 0.8597 |
| 0.2806 | 17.02 | 1600 | 0.3302 | 0.8596 | 0.8597 |
| 0.2738 | 19.15 | 1800 | 0.3483 | 0.8569 | 0.8570 |
| 0.2685 | 21.28 | 2000 | 0.3293 | 0.8664 | 0.8664 |
| 0.2693 | 23.4 | 2200 | 0.3196 | 0.8664 | 0.8664 |
| 0.2562 | 25.53 | 2400 | 0.3518 | 0.8530 | 0.8530 |
| 0.2603 | 27.66 | 2600 | 0.3153 | 0.8671 | 0.8671 |
| 0.261 | 29.79 | 2800 | 0.3262 | 0.8644 | 0.8644 |
| 0.2551 | 31.91 | 3000 | 0.3308 | 0.8631 | 0.8631 |
| 0.2508 | 34.04 | 3200 | 0.3105 | 0.8677 | 0.8677 |
| 0.2504 | 36.17 | 3400 | 0.3317 | 0.8644 | 0.8644 |
| 0.2474 | 38.3 | 3600 | 0.3211 | 0.8684 | 0.8684 |
| 0.2465 | 40.43 | 3800 | 0.3199 | 0.8697 | 0.8697 |
| 0.2447 | 42.55 | 4000 | 0.3468 | 0.8577 | 0.8577 |
| 0.242 | 44.68 | 4200 | 0.3231 | 0.8670 | 0.8671 |
| 0.2395 | 46.81 | 4400 | 0.3210 | 0.8684 | 0.8684 |
| 0.2409 | 48.94 | 4600 | 0.3285 | 0.8650 | 0.8651 |
| 0.2362 | 51.06 | 4800 | 0.3240 | 0.8670 | 0.8671 |
| 0.2354 | 53.19 | 5000 | 0.3370 | 0.8716 | 0.8717 |
| 0.2391 | 55.32 | 5200 | 0.3197 | 0.8677 | 0.8677 |
| 0.2323 | 57.45 | 5400 | 0.3376 | 0.8631 | 0.8631 |
| 0.2301 | 59.57 | 5600 | 0.3173 | 0.8684 | 0.8684 |
| 0.2336 | 61.7 | 5800 | 0.3153 | 0.8671 | 0.8671 |
| 0.2276 | 63.83 | 6000 | 0.3420 | 0.8663 | 0.8664 |
| 0.2287 | 65.96 | 6200 | 0.3250 | 0.8731 | 0.8731 |
| 0.2259 | 68.09 | 6400 | 0.3270 | 0.8731 | 0.8731 |
| 0.2264 | 70.21 | 6600 | 0.3400 | 0.8657 | 0.8657 |
| 0.2263 | 72.34 | 6800 | 0.3203 | 0.8718 | 0.8717 |
| 0.223 | 74.47 | 7000 | 0.3480 | 0.8682 | 0.8684 |
| 0.2205 | 76.6 | 7200 | 0.3297 | 0.8711 | 0.8711 |
| 0.226 | 78.72 | 7400 | 0.3261 | 0.8711 | 0.8711 |
| 0.222 | 80.85 | 7600 | 0.3342 | 0.8664 | 0.8664 |
| 0.2208 | 82.98 | 7800 | 0.3288 | 0.8711 | 0.8711 |
| 0.2211 | 85.11 | 8000 | 0.3224 | 0.8718 | 0.8717 |
| 0.2179 | 87.23 | 8200 | 0.3271 | 0.8711 | 0.8711 |
| 0.2192 | 89.36 | 8400 | 0.3299 | 0.8711 | 0.8711 |
| 0.2202 | 91.49 | 8600 | 0.3340 | 0.8691 | 0.8691 |
| 0.2151 | 93.62 | 8800 | 0.3307 | 0.8717 | 0.8717 |
| 0.2198 | 95.74 | 9000 | 0.3376 | 0.8664 | 0.8664 |
| 0.2138 | 97.87 | 9200 | 0.3277 | 0.8738 | 0.8737 |
| 0.2163 | 100.0 | 9400 | 0.3294 | 0.8704 | 0.8704 |
| 0.2148 | 102.13 | 9600 | 0.3324 | 0.8704 | 0.8704 |
| 0.2144 | 104.26 | 9800 | 0.3316 | 0.8704 | 0.8704 |
| 0.2169 | 106.38 | 10000 | 0.3303 | 0.8711 | 0.8711 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T03:57:07+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3-seqsight\_8192\_512\_30M-L1\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3117
* F1 Score: 0.8757
* Accuracy: 0.8758
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# leagaleasy-phi-3-adapter
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
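As a rough illustration (not the original training script), the configuration above could be expressed with `TrainingArguments` plus a PEFT LoRA config; the LoRA rank, alpha, and output directory below are assumptions, since the card does not document them.

```python
from transformers import TrainingArguments
from peft import LoraConfig

# Sketch of the run configuration above; fp16 stands in for "Native AMP".
args = TrainingArguments(
    output_dir="leagaleasy-phi-3-adapter",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # total train batch size: 4
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=4,
    fp16=True,
)

# Hypothetical adapter config -- r and lora_alpha are not documented in this card.
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
```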
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "model-index": [{"name": "leagaleasy-phi-3-adapter", "results": []}]} | Nithin29/leagaleasy-phi-3-adapter | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-04-27T03:59:59+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #region-us
|
# leagaleasy-phi-3-adapter
This model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# leagaleasy-phi-3-adapter\n\nThis model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 4\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #region-us \n",
"# leagaleasy-phi-3-adapter\n\nThis model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 4\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3073
- F1 Score: 0.8784
- Accuracy: 0.8784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
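A minimal inference sketch is shown below. Note that the classification head size (`num_labels=2`), `trust_remote_code=True`, and the raw-sequence input format are assumptions about the base model, not facts documented in this card.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3-seqsight_8192_512_30M-L8_f"

# Assumptions: a binary classification head and custom model code on the Hub.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

# The expected input formatting (raw sequence vs. k-mers) depends on the tokenizer.
outputs = model(**tokenizer("ACGTACGTACGT", return_tensors="pt"))
```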
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4599 | 2.13 | 200 | 0.4099 | 0.8034 | 0.8049 |
| 0.326 | 4.26 | 400 | 0.3549 | 0.8505 | 0.8510 |
| 0.2913 | 6.38 | 600 | 0.3386 | 0.8624 | 0.8624 |
| 0.2751 | 8.51 | 800 | 0.3119 | 0.8744 | 0.8744 |
| 0.2618 | 10.64 | 1000 | 0.3183 | 0.8691 | 0.8691 |
| 0.2539 | 12.77 | 1200 | 0.3306 | 0.8631 | 0.8631 |
| 0.2466 | 14.89 | 1400 | 0.3340 | 0.8697 | 0.8697 |
| 0.2394 | 17.02 | 1600 | 0.3239 | 0.8730 | 0.8731 |
| 0.2341 | 19.15 | 1800 | 0.3410 | 0.8589 | 0.8591 |
| 0.2248 | 21.28 | 2000 | 0.3448 | 0.8684 | 0.8684 |
| 0.2254 | 23.4 | 2200 | 0.3245 | 0.8798 | 0.8798 |
| 0.2104 | 25.53 | 2400 | 0.3476 | 0.8691 | 0.8691 |
| 0.2125 | 27.66 | 2600 | 0.3308 | 0.8724 | 0.8724 |
| 0.2054 | 29.79 | 2800 | 0.3384 | 0.8771 | 0.8771 |
| 0.1984 | 31.91 | 3000 | 0.3369 | 0.8684 | 0.8684 |
| 0.1927 | 34.04 | 3200 | 0.3278 | 0.8811 | 0.8811 |
| 0.1894 | 36.17 | 3400 | 0.3380 | 0.8778 | 0.8778 |
| 0.1846 | 38.3 | 3600 | 0.3533 | 0.8724 | 0.8724 |
| 0.1814 | 40.43 | 3800 | 0.3780 | 0.8669 | 0.8671 |
| 0.1788 | 42.55 | 4000 | 0.3799 | 0.8670 | 0.8671 |
| 0.171 | 44.68 | 4200 | 0.3806 | 0.8670 | 0.8671 |
| 0.1684 | 46.81 | 4400 | 0.3548 | 0.8771 | 0.8771 |
| 0.1676 | 48.94 | 4600 | 0.3834 | 0.8723 | 0.8724 |
| 0.1627 | 51.06 | 4800 | 0.3567 | 0.8784 | 0.8784 |
| 0.1578 | 53.19 | 5000 | 0.3909 | 0.8717 | 0.8717 |
| 0.1618 | 55.32 | 5200 | 0.3847 | 0.8717 | 0.8717 |
| 0.1505 | 57.45 | 5400 | 0.4032 | 0.8717 | 0.8717 |
| 0.1472 | 59.57 | 5600 | 0.3874 | 0.8758 | 0.8758 |
| 0.1467 | 61.7 | 5800 | 0.3742 | 0.8764 | 0.8764 |
| 0.1387 | 63.83 | 6000 | 0.4088 | 0.8811 | 0.8811 |
| 0.1413 | 65.96 | 6200 | 0.4302 | 0.8623 | 0.8624 |
| 0.1385 | 68.09 | 6400 | 0.4217 | 0.8677 | 0.8677 |
| 0.1348 | 70.21 | 6600 | 0.4275 | 0.8710 | 0.8711 |
| 0.1335 | 72.34 | 6800 | 0.3906 | 0.8771 | 0.8771 |
| 0.1308 | 74.47 | 7000 | 0.4620 | 0.8594 | 0.8597 |
| 0.127 | 76.6 | 7200 | 0.4327 | 0.8790 | 0.8791 |
| 0.1308 | 78.72 | 7400 | 0.4144 | 0.8791 | 0.8791 |
| 0.1241 | 80.85 | 7600 | 0.4395 | 0.8704 | 0.8704 |
| 0.1224 | 82.98 | 7800 | 0.4443 | 0.8717 | 0.8717 |
| 0.1235 | 85.11 | 8000 | 0.4423 | 0.8656 | 0.8657 |
| 0.1213 | 87.23 | 8200 | 0.4459 | 0.8690 | 0.8691 |
| 0.1202 | 89.36 | 8400 | 0.4360 | 0.8771 | 0.8771 |
| 0.1186 | 91.49 | 8600 | 0.4519 | 0.8730 | 0.8731 |
| 0.1159 | 93.62 | 8800 | 0.4460 | 0.8724 | 0.8724 |
| 0.1173 | 95.74 | 9000 | 0.4570 | 0.8656 | 0.8657 |
| 0.1129 | 97.87 | 9200 | 0.4473 | 0.8764 | 0.8764 |
| 0.1127 | 100.0 | 9400 | 0.4517 | 0.8737 | 0.8737 |
| 0.1139 | 102.13 | 9600 | 0.4541 | 0.8724 | 0.8724 |
| 0.1124 | 104.26 | 9800 | 0.4552 | 0.8710 | 0.8711 |
| 0.1091 | 106.38 | 10000 | 0.4533 | 0.8744 | 0.8744 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T04:00:10+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3-seqsight\_8192\_512\_30M-L8\_f
===========================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3073
* F1 Score: 0.8784
* Accuracy: 0.8784
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5070
- F1 Score: 0.8764
- Accuracy: 0.8764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
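The F1 and accuracy columns in the results table below could be produced with a standard `compute_metrics` callback; this sketch assumes macro-averaged F1, which the card does not state explicitly.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes at evaluation time.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),  # averaging is an assumption
    }
```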
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4322 | 2.13 | 200 | 0.3501 | 0.8523 | 0.8524 |
| 0.292 | 4.26 | 400 | 0.3385 | 0.8660 | 0.8664 |
| 0.2698 | 6.38 | 600 | 0.3431 | 0.8617 | 0.8617 |
| 0.2532 | 8.51 | 800 | 0.3031 | 0.8757 | 0.8758 |
| 0.2347 | 10.64 | 1000 | 0.3406 | 0.8683 | 0.8684 |
| 0.2237 | 12.77 | 1200 | 0.3251 | 0.8717 | 0.8717 |
| 0.2101 | 14.89 | 1400 | 0.3374 | 0.8744 | 0.8744 |
| 0.2001 | 17.02 | 1600 | 0.3391 | 0.8775 | 0.8778 |
| 0.187 | 19.15 | 1800 | 0.3406 | 0.8711 | 0.8711 |
| 0.1703 | 21.28 | 2000 | 0.3401 | 0.8811 | 0.8811 |
| 0.1702 | 23.4 | 2200 | 0.3899 | 0.8690 | 0.8691 |
| 0.1493 | 25.53 | 2400 | 0.3893 | 0.8744 | 0.8744 |
| 0.145 | 27.66 | 2600 | 0.3886 | 0.8750 | 0.8751 |
| 0.1306 | 29.79 | 2800 | 0.4189 | 0.8682 | 0.8684 |
| 0.1211 | 31.91 | 3000 | 0.4361 | 0.8601 | 0.8604 |
| 0.1078 | 34.04 | 3200 | 0.4087 | 0.8831 | 0.8831 |
| 0.1011 | 36.17 | 3400 | 0.4195 | 0.8824 | 0.8824 |
| 0.0951 | 38.3 | 3600 | 0.4384 | 0.8751 | 0.8751 |
| 0.088 | 40.43 | 3800 | 0.4612 | 0.8723 | 0.8724 |
| 0.0821 | 42.55 | 4000 | 0.5273 | 0.8697 | 0.8697 |
| 0.0781 | 44.68 | 4200 | 0.5045 | 0.8777 | 0.8778 |
| 0.0717 | 46.81 | 4400 | 0.4913 | 0.8778 | 0.8778 |
| 0.0684 | 48.94 | 4600 | 0.5181 | 0.8764 | 0.8764 |
| 0.0634 | 51.06 | 4800 | 0.4860 | 0.8784 | 0.8784 |
| 0.0567 | 53.19 | 5000 | 0.5377 | 0.8744 | 0.8744 |
| 0.0559 | 55.32 | 5200 | 0.5495 | 0.8811 | 0.8811 |
| 0.0509 | 57.45 | 5400 | 0.5644 | 0.8784 | 0.8784 |
| 0.0512 | 59.57 | 5600 | 0.5268 | 0.8824 | 0.8824 |
| 0.0477 | 61.7 | 5800 | 0.5323 | 0.8891 | 0.8891 |
| 0.0463 | 63.83 | 6000 | 0.5887 | 0.8744 | 0.8744 |
| 0.0472 | 65.96 | 6200 | 0.5930 | 0.8771 | 0.8771 |
| 0.0443 | 68.09 | 6400 | 0.5965 | 0.8703 | 0.8704 |
| 0.0365 | 70.21 | 6600 | 0.6416 | 0.8710 | 0.8711 |
| 0.0402 | 72.34 | 6800 | 0.5807 | 0.8838 | 0.8838 |
| 0.0366 | 74.47 | 7000 | 0.6664 | 0.8689 | 0.8691 |
| 0.0352 | 76.6 | 7200 | 0.6275 | 0.8791 | 0.8791 |
| 0.0343 | 78.72 | 7400 | 0.6229 | 0.8831 | 0.8831 |
| 0.0328 | 80.85 | 7600 | 0.6929 | 0.8710 | 0.8711 |
| 0.0281 | 82.98 | 7800 | 0.6863 | 0.8770 | 0.8771 |
| 0.0314 | 85.11 | 8000 | 0.6379 | 0.8764 | 0.8764 |
| 0.0295 | 87.23 | 8200 | 0.6744 | 0.8757 | 0.8758 |
| 0.0268 | 89.36 | 8400 | 0.6775 | 0.8804 | 0.8804 |
| 0.0275 | 91.49 | 8600 | 0.6819 | 0.8804 | 0.8804 |
| 0.0251 | 93.62 | 8800 | 0.6765 | 0.8791 | 0.8791 |
| 0.0243 | 95.74 | 9000 | 0.7077 | 0.8804 | 0.8804 |
| 0.0255 | 97.87 | 9200 | 0.6910 | 0.8797 | 0.8798 |
| 0.0234 | 100.0 | 9400 | 0.6982 | 0.8811 | 0.8811 |
| 0.023 | 102.13 | 9600 | 0.7052 | 0.8750 | 0.8751 |
| 0.0233 | 104.26 | 9800 | 0.6939 | 0.8817 | 0.8818 |
| 0.0229 | 106.38 | 10000 | 0.6918 | 0.8817 | 0.8818 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T04:00:10+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3-seqsight\_8192\_512\_30M-L32\_f
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5070
* F1 Score: 0.8764
* Accuracy: 0.8764
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mohamedshaaban2001/llama3_2 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T04:04:26+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-llama-adapterhappy2sad-study-50-0.003 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T04:05:14+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# EnverLee/phi2-ko-instruction-tune-Q2_K-GGUF
This model was converted to GGUF format from [`inoutro/phi2-ko-instruction-tune`](https://huggingface.co/inoutro/phi2-ko-instruction-tune) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/inoutro/phi2-ko-instruction-tune) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo EnverLee/phi2-ko-instruction-tune-Q2_K-GGUF --model phi2-ko-instruction-tune.Q2_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo EnverLee/phi2-ko-instruction-tune-Q2_K-GGUF --model phi2-ko-instruction-tune.Q2_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi2-ko-instruction-tune.Q2_K.gguf -n 128
```
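If you prefer Python bindings, a rough equivalent using `llama-cpp-python` might look like the sketch below (this assumes `pip install llama-cpp-python huggingface-hub`; it is not part of the official conversion workflow).

```python
from llama_cpp import Llama

# Downloads the quantized file from the Hub and runs a short completion.
llm = Llama.from_pretrained(
    repo_id="EnverLee/phi2-ko-instruction-tune-Q2_K-GGUF",
    filename="phi2-ko-instruction-tune.Q2_K.gguf",
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```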
| {"language": ["ko"], "license": "cc-by-3.0", "tags": ["llama-cpp", "gguf-my-repo"]} | EnverLee/phi2-ko-instruction-tune-Q2_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"ko",
"license:cc-by-3.0",
"region:us"
] | null | 2024-04-27T04:06:43+00:00 | [] | [
"ko"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #ko #license-cc-by-3.0 #region-us
|
# EnverLee/phi2-ko-instruction-tune-Q2_K-GGUF
This model was converted to GGUF format from 'inoutro/phi2-ko-instruction-tune' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# EnverLee/phi2-ko-instruction-tune-Q2_K-GGUF\nThis model was converted to GGUF format from 'inoutro/phi2-ko-instruction-tune' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #ko #license-cc-by-3.0 #region-us \n",
"# EnverLee/phi2-ko-instruction-tune-Q2_K-GGUF\nThis model was converted to GGUF format from 'inoutro/phi2-ko-instruction-tune' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# M7Meliodaspercival_01_experiment26t3q-7B
M7Meliodaspercival_01_experiment26t3q-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: liminerity/M7-7b
- model: MaziyarPanahi/MeliodasPercival_01_Experiment26T3q
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/M7Meliodaspercival_01_experiment26t3q-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/M7Meliodaspercival_01_experiment26t3q-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-04-27T04:08:37+00:00 | [] | [] | TAGS
#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
|
# M7Meliodaspercival_01_experiment26t3q-7B
M7Meliodaspercival_01_experiment26t3q-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
| [
"# M7Meliodaspercival_01_experiment26t3q-7B\n\nM7Meliodaspercival_01_experiment26t3q-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] | [
"TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n",
"# M7Meliodaspercival_01_experiment26t3q-7B\n\nM7Meliodaspercival_01_experiment26t3q-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mohamedshaaban2001/llama3_3 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T04:11:12+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
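For a tokenizer repository like this one, a minimal usage sketch with `AutoTokenizer` is shown below (the repo id is taken from this card's metadata; the sample input and printed fields are illustrative assumptions):

```python
from transformers import AutoTokenizer

# Load the tokenizer that was pushed to the Hub (repo id from the metadata below).
tokenizer = AutoTokenizer.from_pretrained("tarunabraham1986/code-search-net-tokenizer")

# A CodeSearchNet-style code snippet as input is an assumption based on the repo name.
example = "def add(a, b):\n    return a + b"
encoded = tokenizer(example)

print(encoded.tokens()[:10])                   # first few tokens (fast tokenizers only)
print(tokenizer.decode(encoded["input_ids"]))  # round-trip back to text
```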
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | tarunabraham1986/code-search-net-tokenizer | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T04:11:14+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | diffusers | <p align="center">
<img src="https://github.com/JackAILab/ConsistentID/assets/135965025/c0594480-d73d-4268-95ca-5494ca2a61e4" height=20>
</p>
<!-- ## <div align="center"><b>ConsistentID</b></div> -->
<div align="center">
## ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving
[📄[Paper](https://arxiv.org/abs/2404.16771)]   [🚩[Project Page](https://ssugarwh.github.io/consistentid.github.io/)]   [🖼[Gradio Demo](http://consistentid.natapp1.cc/)] <br>
</div>
### 🌠 **Key Features:**
1. Portrait generation with extremely high **ID fidelity**, without sacrificing diversity or text controllability.
2. Introducing **FaceParsing** and **FaceID** information into the Diffusion model.
3. Rapid customization **within seconds**, with no additional LoRA training.
4. Can serve as an **Adapter** to collaborate with other Base Models alongside LoRA modules in the community.
---
## 🔥 **Examples**
<p align="center">
<img src="https://github.com/JackAILab/ConsistentID/assets/135965025/f949a03d-bed2-4839-a995-7b451d8c981b" height=450>
</p>
## 🚩 To-Do List
Your star will help facilitate the process.
- [x] Release training, evaluation code, and demo!
- [ ] Retrain with more data and the SDXL base model to enhance aesthetics and generalization.
- [ ] Release a multi-ID input version to guide the improvement of ID diversity.
- [ ] Optimize training and inference structures to further improve text following and ID decoupling capabilities.
## 🏷️ Abstract
This is a work in the field of AIGC that introduces FaceParsing information and FaceID information into the Diffusion model. Previous work mainly focused on overall ID preservation; even in recently proposed fine-grained ID preservation models such as InstantID, the injection of facial ID features is fixed. In order to achieve more flexible consistency maintenance of fine-grained IDs for facial features, a batch of 50000 multimodal fine-grained ID samples was reconstructed for training the proposed FacialEncoder model, which can support common functions such as personalized photos, gender/age changes, and identity confusion.
At the same time, we have defined a unified measurement benchmark FGIS for Fine-Grained Identity Preservation, covering several common facial personalized character scenes and characters, and constructed a fine-grained ID preservation model baseline.
Finally, a large number of experiments were conducted in this article, and ConsistentID achieved SOTA performance on facial personalization tasks. It was verified that ConsistentID can improve ID consistency and even modify facial features by selecting finer-grained prompts, which opens up a direction for future research on Fine-Grained facial personalization.
## 🔧 Requirements
To install requirements:
```setup
pip3 install -r requirements.txt
```
## 📦️ Data Preparation
Prepare Data in the following format
├── data
| ├── JSON_all.json
| ├── resize_IMG # Images
| ├── all_faceID # FaceID
| └── parsing_mask_IMG # Parsing Mask
The .json file should look like
```
[
{
"resize_IMG": "Path to resized image...",
"parsing_color_IMG": "...",
"parsing_mask_IMG": "...",
"vqa_llva": "...",
"id_embed_file_resize": "...",
"vqa_llva_more_face_detail": "..."
},
...
]
```
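Before training, it can help to validate the layout above. The following is a minimal sanity-check sketch (not part of the official repo; the file location and key names are taken from the example above):

```python
import json
import os

# Keys expected in each record, per the example above.
REQUIRED_KEYS = {
    "resize_IMG", "parsing_color_IMG", "parsing_mask_IMG",
    "vqa_llva", "id_embed_file_resize", "vqa_llva_more_face_detail",
}

with open("data/JSON_all.json", "r") as f:  # assumed path, per the tree above
    records = json.load(f)

for i, rec in enumerate(records):
    missing = REQUIRED_KEYS - rec.keys()
    if missing:
        raise ValueError(f"record {i} is missing keys: {missing}")
    if not os.path.exists(rec["resize_IMG"]):
        print(f"warning: image not found: {rec['resize_IMG']}")

print(f"checked {len(records)} records")
```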
## 🚀 Train
Ensure that the workspace is the root directory of the project.
```setup
bash train_bash.sh
```
## 🧪 Infer
Ensure that the workspace is the root directory of the project.
```setup
python infer.py
```
## ⏬ Model weights
We are hosting the model weights on **huggingface** to achieve a faster and more stable demo experience, so stay tuned ~
The pre-trained model parameters of the model can now be downloaded on [Google Drive](https://drive.google.com/file/d/1jCHICryESmNkzGi8J_FlY3PjJz9gqoSI/view?usp=drive_link) or [Baidu Netdisk](https://pan.baidu.com/s/1NAVmH8S7Ls5rZc-snDk1Ng?pwd=nsh6).
## Acknowledgement
* Inspired from many excellent demos and repos, including [IPAdapter](https://github.com/tencent-ailab/IP-Adapter), [FastComposer](https://github.com/mit-han-lab/fastcomposer), [PhotoMaker](https://github.com/TencentARC/PhotoMaker). Thanks for their great works!
* Thanks to the open source contributions of the following work: [face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch), [LLaVA](https://github.com/haotian-liu/LLaVA), [insightface](https://github.com/deepinsight/insightface), [FFHQ](https://github.com/NVlabs/ffhq-dataset), [CelebA](https://github.com/switchablenorms/CelebAMask-HQ), [SFHQ](https://github.com/SelfishGene/SFHQ-dataset).
* Thanks to the [HuggingFace](https://github.com/huggingface) gradio team for their free GPU support!
## Disclaimer
This project strives to impact the domain of AI-driven image generation positively. Users are granted the freedom to create images using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.
## Citation
If you found this code helpful, please consider citing:
~~~
@article{huang2024consistentid,
title={ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving},
author={Huang, Jiehui and Dong, Xiao and Song, Wenhui and Li, Hanhui and Zhou, Jun and Cheng, Yuhao and Liao, Shutao and Chen, Long and Yan, Yiqiang and Liao, Shengcai and others},
journal={arXiv preprint arXiv:2404.16771},
year={2024}
}
~~~
| {"language": ["ak"], "license": "mit", "library_name": "diffusers"} | JackAILab/ConsistentID | null | [
"diffusers",
"ak",
"arxiv:2404.16771",
"license:mit",
"region:us",
"has_space"
] | null | 2024-04-27T04:16:59+00:00 | [
"2404.16771"
] | [
"ak"
] | TAGS
#diffusers #ak #arxiv-2404.16771 #license-mit #region-us #has_space
| <p align="center">
<img src="URL height=20>
</p>
<div align="center">
## ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving
[Paper]   [Project Page]   [Gradio Demo] <br>
</div>
### Key Features:
1. Portrait generation with extremely high ID fidelity, without sacrificing diversity or text controllability.
2. Introducing FaceParsing and FaceID information into the Diffusion model.
3. Rapid customization within seconds, with no additional LoRA training.
4. Can serve as an Adapter to collaborate with other Base Models alongside LoRA modules in the community.
---
## Examples
<p align="center">
<img src="URL height=450>
</p>
## To-Do List
Your star will help facilitate the process.
- [x] Release training, evaluation code, and demo!
- [ ] Retrain with more data and the SDXL base model to enhance aesthetics and generalization.
- [ ] Release a multi-ID input version to guide the improvement of ID diversity.
- [ ] Optimize training and inference structures to further improve text following and ID decoupling capabilities.
## Abstract
This is a work in the field of AIGC that introduces FaceParsing information and FaceID information into the Diffusion model. Previous work mainly focused on overall ID preservation; even in recently proposed fine-grained ID preservation models such as InstantID, the injection of facial ID features is fixed. In order to achieve more flexible consistency maintenance of fine-grained IDs for facial features, a batch of 50000 multimodal fine-grained ID samples was reconstructed for training the proposed FacialEncoder model, which can support common functions such as personalized photos, gender/age changes, and identity confusion.
At the same time, we have defined a unified measurement benchmark FGIS for Fine-Grained Identity Preservation, covering several common facial personalized character scenes and characters, and constructed a fine-grained ID preservation model baseline.
Finally, a large number of experiments were conducted in this article, and ConsistentID achieved SOTA performance on facial personalization tasks. It was verified that ConsistentID can improve ID consistency and even modify facial features by selecting finer-grained prompts, which opens up a direction for future research on Fine-Grained facial personalization.
## Requirements
To install requirements:
## Data Preparation
Prepare Data in the following format
├── data
| ├── JSON_all.json
| ├── resize_IMG # Images
| ├── all_faceID # FaceID
| └── parsing_mask_IMG # Parsing Mask
The .json file should look like
## Train
Ensure that the workspace is the root directory of the project.
## Infer
Ensure that the workspace is the root directory of the project.
## ⏬ Model weights
We are hosting the model weights on huggingface to achieve a faster and more stable demo experience, so stay tuned ~
The pre-trained model parameters of the model can now be downloaded on Google Drive or Baidu Netdisk.
## Acknowledgement
* Inspired from many excellent demos and repos, including IPAdapter, FastComposer, PhotoMaker. Thanks for their great works!
* Thanks to the open source contributions of the following work: face-parsing.PyTorch, LLaVA, insightface, FFHQ, CelebA, SFHQ.
* Thanks to the HuggingFace gradio team for their free GPU support!
## Disclaimer
This project strives to impact the domain of AI-driven image generation positively. Users are granted the freedom to create images using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.
If you found this code helpful, please consider citing:
~~~
@article{huang2024consistentid,
title={ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving},
author={Huang, Jiehui and Dong, Xiao and Song, Wenhui and Li, Hanhui and Zhou, Jun and Cheng, Yuhao and Liao, Shutao and Chen, Long and Yan, Yiqiang and Liao, Shengcai and others},
journal={arXiv preprint arXiv:2404.16771},
year={2024}
}
~~~
| [
"## ConsistentID : Portrait Generation with Multimodal Fine-Grained Identity Preserving ![Paper page]()\n[Paper]   [Project Page]   [Gradio Demo] <br>\n\n\n</div>",
"### Key Features:\n\n1. Portrait generation with extremely high ID fidelity, without sacrificing diversity, text controllability.\n2. Introducing FaceParsing and FaceID information into the Diffusion model.\n3. Rapid customization within seconds, with no additional LoRA training.\n4. Can serve as an Adapter to collaborate with other Base Models alongside LoRA modules in community.\n\n---",
"## Examples\n\n<p align=\"center\">\n \n <img src=\"URL height=450>\n\n\n</p>",
"## To-Do List\nYour star will help facilitate the process.\n- [x] Release training, evaluation code, and demo!\n- [ ] Retrain with more data and the SDXL base model to enhance aesthetics and generalization.\n- [ ] Release a multi-ID input version to guide the improvement of ID diversity.\n- [ ] Optimize training and inference structures to further improve text following and ID decoupling capabilities.",
"## ️ Abstract\n\nThis is a work in the field of AIGC that introduces FaceParsing information and FaceID information into the Diffusion model. Previous work mainly focused on overall ID preservation, even though fine-grained ID preservation models such as InstantID have recently been proposed, the injection of facial ID features will be fixed. In order to achieve more flexible consistency maintenance of fine-grained IDs for facial features, a batch of 50000 multimodal fine-grained ID datasets was reconstructed for training the proposed FacialEncoder model, which can support common functions such as personalized photos, gender/age changes, and identity confusion.\n\nAt the same time, we have defined a unified measurement benchmark FGIS for Fine-Grained Identity Preservice, covering several common facial personalized character scenes and characters, and constructed a fine-grained ID preservation model baseline.\n\nFinally, a large number of experiments were conducted in this article, and ConsistentID achieved the effect of SOTA in facial personalization task processing. It was verified that ConsistentID can improve ID consistency and even modify facial features by selecting finer-grained prompts, which opens up a direction for future research on Fine-Grained facial personalization.",
"## Requirements\n\nTo install requirements:",
"## ️ Data Preparation\n\nPrepare Data in the following format\n\n ├── data\n | ├── JSON_all.json \n | ├── resize_IMG # Imgaes \n | ├── all_faceID # FaceID\n | └── parsing_mask_IMG # Parsing Mask \n\nThe .json file should be like",
"## Train\nEnsure that the workspace is the root directory of the project.",
"## Infer\nEnsure that the workspace is the root directory of the project.",
"## ⏬ Model weights\nWe are hosting the model weights on huggingface to achieve a faster and more stable demo experience, so stay tuned ~\n\nThe pre-trained model parameters of the model can now be downloaded on Google Drive or Baidu Netdisk.",
"## Acknowledgement\n* Inspired from many excellent demos and repos, including IPAdapter, FastComposer, PhotoMaker. Thanks for their great works!\n* Thanks to the open source contributions of the following work: face-parsing.PyTorch, LLaVA, insightface, FFHQ, CelebA, SFHQ.\n* Thanks to the HuggingFace gradio team for their free GPU support!",
"## Disclaimer\nThis project strives to impact the domain of AI-driven image generation positively. Users are granted the freedom to create images using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.\n\n\nIf you found this code helpful, please consider citing:\n~~~\n@article{huang2024consistentid,\n title={ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving},\n author={Huang, Jiehui and Dong, Xiao and Song, Wenhui and Li, Hanhui and Zhou, Jun and Cheng, Yuhao and Liao, Shutao and Chen, Long and Yan, Yiqiang and Liao, Shengcai and others},\n journal={arXiv preprint arXiv:2404.16771},\n year={2024}\n}\n~~~"
] | [
"TAGS\n#diffusers #ak #arxiv-2404.16771 #license-mit #region-us #has_space \n",
"## ConsistentID : Portrait Generation with Multimodal Fine-Grained Identity Preserving ![Paper page]()\n[Paper]   [Project Page]   [Gradio Demo] <br>\n\n\n</div>",
"### Key Features:\n\n1. Portrait generation with extremely high ID fidelity, without sacrificing diversity, text controllability.\n2. Introducing FaceParsing and FaceID information into the Diffusion model.\n3. Rapid customization within seconds, with no additional LoRA training.\n4. Can serve as an Adapter to collaborate with other Base Models alongside LoRA modules in community.\n\n---",
"## Examples\n\n<p align=\"center\">\n \n <img src=\"URL height=450>\n\n\n</p>",
"## To-Do List\nYour star will help facilitate the process.\n- [x] Release training, evaluation code, and demo!\n- [ ] Retrain with more data and the SDXL base model to enhance aesthetics and generalization.\n- [ ] Release a multi-ID input version to guide the improvement of ID diversity.\n- [ ] Optimize training and inference structures to further improve text following and ID decoupling capabilities.",
"## ️ Abstract\n\nThis is a work in the field of AIGC that introduces FaceParsing information and FaceID information into the Diffusion model. Previous work mainly focused on overall ID preservation, even though fine-grained ID preservation models such as InstantID have recently been proposed, the injection of facial ID features will be fixed. In order to achieve more flexible consistency maintenance of fine-grained IDs for facial features, a batch of 50000 multimodal fine-grained ID datasets was reconstructed for training the proposed FacialEncoder model, which can support common functions such as personalized photos, gender/age changes, and identity confusion.\n\nAt the same time, we have defined a unified measurement benchmark FGIS for Fine-Grained Identity Preservice, covering several common facial personalized character scenes and characters, and constructed a fine-grained ID preservation model baseline.\n\nFinally, a large number of experiments were conducted in this article, and ConsistentID achieved the effect of SOTA in facial personalization task processing. It was verified that ConsistentID can improve ID consistency and even modify facial features by selecting finer-grained prompts, which opens up a direction for future research on Fine-Grained facial personalization.",
"## Requirements\n\nTo install requirements:",
"## ️ Data Preparation\n\nPrepare Data in the following format\n\n ├── data\n | ├── JSON_all.json \n | ├── resize_IMG # Imgaes \n | ├── all_faceID # FaceID\n | └── parsing_mask_IMG # Parsing Mask \n\nThe .json file should be like",
"## Train\nEnsure that the workspace is the root directory of the project.",
"## Infer\nEnsure that the workspace is the root directory of the project.",
"## ⏬ Model weights\nWe are hosting the model weights on huggingface to achieve a faster and more stable demo experience, so stay tuned ~\n\nThe pre-trained model parameters of the model can now be downloaded on Google Drive or Baidu Netdisk.",
"## Acknowledgement\n* Inspired from many excellent demos and repos, including IPAdapter, FastComposer, PhotoMaker. Thanks for their great works!\n* Thanks to the open source contributions of the following work: face-parsing.PyTorch, LLaVA, insightface, FFHQ, CelebA, SFHQ.\n* Thanks to the HuggingFace gradio team for their free GPU support!",
"## Disclaimer\nThis project strives to impact the domain of AI-driven image generation positively. Users are granted the freedom to create images using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.\n\n\nIf you found this code helpful, please consider citing:\n~~~\n@article{huang2024consistentid,\n title={ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving},\n author={Huang, Jiehui and Dong, Xiao and Song, Wenhui and Li, Hanhui and Zhou, Jun and Cheng, Yuhao and Liao, Shutao and Chen, Long and Yan, Yiqiang and Liao, Shengcai and others},\n journal={arXiv preprint arXiv:2404.16771},\n year={2024}\n}\n~~~"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-140
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
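For reference, a minimal inference sketch with the Transformers `pipeline` API (the repo id `huiang/distilbert-140` is taken from the metadata that follows; the label set depends on the fine-tuned head and is not documented in this card):

```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned classifier.
classifier = pipeline("text-classification", model="huiang/distilbert-140")
print(classifier("This is an example sentence."))
# -> [{'label': ..., 'score': ...}]
```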
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "distilbert-140", "results": []}]} | huiang/distilbert-140 | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T04:17:41+00:00 | [] | [] | TAGS
#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# distilbert-140
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# distilbert-140\n\nThis model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilbert-140\n\nThis model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | null |
# chenduo/Llama-3-Unholy-8B-e4-Q6_K-GGUF
This model was converted to GGUF format from [`Undi95/Llama-3-Unholy-8B-e4`](https://huggingface.co/Undi95/Llama-3-Unholy-8B-e4) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Undi95/Llama-3-Unholy-8B-e4) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo chenduo/Llama-3-Unholy-8B-e4-Q6_K-GGUF --model llama-3-unholy-8b-e4.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo chenduo/Llama-3-Unholy-8B-e4-Q6_K-GGUF --model llama-3-unholy-8b-e4.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-unholy-8b-e4.Q6_K.gguf -n 128
```
| {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw", "llama-cpp", "gguf-my-repo"]} | chenduo/Llama-3-Unholy-8B-e4-Q6_K-GGUF | null | [
"gguf",
"not-for-all-audiences",
"nsfw",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-27T04:17:45+00:00 | [] | [] | TAGS
#gguf #not-for-all-audiences #nsfw #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us
|
# chenduo/Llama-3-Unholy-8B-e4-Q6_K-GGUF
This model was converted to GGUF format from 'Undi95/Llama-3-Unholy-8B-e4' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# chenduo/Llama-3-Unholy-8B-e4-Q6_K-GGUF\nThis model was converted to GGUF format from 'Undi95/Llama-3-Unholy-8B-e4' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #not-for-all-audiences #nsfw #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us \n",
"# chenduo/Llama-3-Unholy-8B-e4-Q6_K-GGUF\nThis model was converted to GGUF format from 'Undi95/Llama-3-Unholy-8B-e4' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5487
- F1 Score: 0.7314
- Accuracy: 0.7311
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6424 | 0.93 | 200 | 0.5878 | 0.6972 | 0.6971 |
| 0.5937 | 1.87 | 400 | 0.5856 | 0.7043 | 0.7065 |
| 0.5713 | 2.8 | 600 | 0.5549 | 0.7279 | 0.7276 |
| 0.5626 | 3.74 | 800 | 0.5570 | 0.7267 | 0.7267 |
| 0.555 | 4.67 | 1000 | 0.5495 | 0.7334 | 0.7331 |
| 0.5452 | 5.61 | 1200 | 0.5556 | 0.7255 | 0.7258 |
| 0.5456 | 6.54 | 1400 | 0.5529 | 0.7267 | 0.7270 |
| 0.5351 | 7.48 | 1600 | 0.5454 | 0.7384 | 0.7381 |
| 0.5455 | 8.41 | 1800 | 0.5389 | 0.7405 | 0.7402 |
| 0.5363 | 9.35 | 2000 | 0.5550 | 0.7326 | 0.7331 |
| 0.5308 | 10.28 | 2200 | 0.5420 | 0.7408 | 0.7405 |
| 0.5319 | 11.21 | 2400 | 0.5461 | 0.7348 | 0.7349 |
| 0.5286 | 12.15 | 2600 | 0.5469 | 0.7356 | 0.7358 |
| 0.5256 | 13.08 | 2800 | 0.5435 | 0.7420 | 0.7419 |
| 0.5265 | 14.02 | 3000 | 0.5393 | 0.7364 | 0.7361 |
| 0.5246 | 14.95 | 3200 | 0.5433 | 0.7377 | 0.7378 |
| 0.5214 | 15.89 | 3400 | 0.5467 | 0.7387 | 0.7390 |
| 0.5192 | 16.82 | 3600 | 0.5376 | 0.7384 | 0.7381 |
| 0.5221 | 17.76 | 3800 | 0.5390 | 0.7429 | 0.7428 |
| 0.5194 | 18.69 | 4000 | 0.5362 | 0.7425 | 0.7422 |
| 0.5146 | 19.63 | 4200 | 0.5428 | 0.7435 | 0.7437 |
| 0.5169 | 20.56 | 4400 | 0.5344 | 0.7478 | 0.7475 |
| 0.5137 | 21.5 | 4600 | 0.5554 | 0.7331 | 0.7340 |
| 0.5135 | 22.43 | 4800 | 0.5325 | 0.7403 | 0.7402 |
| 0.512 | 23.36 | 5000 | 0.5467 | 0.7451 | 0.7455 |
| 0.5143 | 24.3 | 5200 | 0.5323 | 0.7452 | 0.7449 |
| 0.5114 | 25.23 | 5400 | 0.5372 | 0.7443 | 0.7440 |
| 0.5119 | 26.17 | 5600 | 0.5342 | 0.7431 | 0.7428 |
| 0.5076 | 27.1 | 5800 | 0.5323 | 0.7481 | 0.7478 |
| 0.5033 | 28.04 | 6000 | 0.5375 | 0.7481 | 0.7478 |
| 0.5092 | 28.97 | 6200 | 0.5409 | 0.7431 | 0.7431 |
| 0.5087 | 29.91 | 6400 | 0.5336 | 0.7446 | 0.7443 |
| 0.5068 | 30.84 | 6600 | 0.5447 | 0.7414 | 0.7416 |
| 0.5039 | 31.78 | 6800 | 0.5335 | 0.7463 | 0.7460 |
| 0.5055 | 32.71 | 7000 | 0.5344 | 0.7475 | 0.7472 |
| 0.5019 | 33.64 | 7200 | 0.5390 | 0.7437 | 0.7437 |
| 0.5028 | 34.58 | 7400 | 0.5360 | 0.7457 | 0.7455 |
| 0.5044 | 35.51 | 7600 | 0.5333 | 0.7454 | 0.7452 |
| 0.4999 | 36.45 | 7800 | 0.5364 | 0.7469 | 0.7466 |
| 0.5038 | 37.38 | 8000 | 0.5428 | 0.7413 | 0.7413 |
| 0.5013 | 38.32 | 8200 | 0.5369 | 0.7454 | 0.7452 |
| 0.4995 | 39.25 | 8400 | 0.5346 | 0.7478 | 0.7475 |
| 0.5054 | 40.19 | 8600 | 0.5328 | 0.7440 | 0.7437 |
| 0.5004 | 41.12 | 8800 | 0.5360 | 0.7460 | 0.7457 |
| 0.5004 | 42.06 | 9000 | 0.5351 | 0.7478 | 0.7475 |
| 0.4999 | 42.99 | 9200 | 0.5401 | 0.7447 | 0.7446 |
| 0.4998 | 43.93 | 9400 | 0.5380 | 0.7471 | 0.7469 |
| 0.4988 | 44.86 | 9600 | 0.5360 | 0.7490 | 0.7487 |
| 0.5002 | 45.79 | 9800 | 0.5367 | 0.7463 | 0.7460 |
| 0.5007 | 46.73 | 10000 | 0.5374 | 0.7471 | 0.7469 |
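For reference, a minimal sketch of loading this adapter with PEFT is shown below. The base-model and adapter ids come from this card; the sequence-classification head and `num_labels=2` are assumptions inferred from the binary F1/accuracy metrics above, and the base model may require extra loading options not stated here.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BASE = "mahdibaghbanzadeh/seqsight_8192_512_30M"                       # from this card
ADAPTER = "mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_8192_512_30M-L1_f"  # this repo

# Assumed task head: binary classification (F1/accuracy reported above).
base_model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=2)
model = PeftModel.from_pretrained(base_model, ADAPTER)
tokenizer = AutoTokenizer.from_pretrained(BASE)

inputs = tokenizer("ACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```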
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T04:18:52+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H4ac-seqsight\_8192\_512\_30M-L1\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5487
* F1 Score: 0.7314
* Accuracy: 0.7311
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
# Kaoeiri/Keiana-L3-Test6.2-8B-18-Q6_K-GGUF
This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test6.2-8B-18`](https://huggingface.co/Kaoeiri/Keiana-L3-Test6.2-8B-18) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test6.2-8B-18) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Kaoeiri/Keiana-L3-Test6.2-8B-18-Q6_K-GGUF --model keiana-l3-test6.2-8b-18.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Kaoeiri/Keiana-L3-Test6.2-8B-18-Q6_K-GGUF --model keiana-l3-test6.2-8b-18.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test6.2-8b-18.Q6_K.gguf -n 128
```
| {"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.4-8B-10", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "Kaoeiri/Keiana-L3-Test6-8B-16", "llama-cpp", "gguf-my-repo"], "base_model": ["Kaoeiri/Keiana-L3-Test5.4-8B-10", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "Kaoeiri/Keiana-L3-Test6-8B-16"]} | Kaoeiri/Keiana-L3-Test6.2-8B-18-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Kaoeiri/Keiana-L3-Test5.4-8B-10",
"Kaoeiri/Keiana-L3-Test4.7-8B-3",
"Kaoeiri/Keiana-L3-Test6-8B-16",
"llama-cpp",
"gguf-my-repo",
"base_model:Kaoeiri/Keiana-L3-Test5.4-8B-10",
"base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3",
"base_model:Kaoeiri/Keiana-L3-Test6-8B-16",
"region:us"
] | null | 2024-04-27T04:19:01+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test5.4-8B-10 #Kaoeiri/Keiana-L3-Test4.7-8B-3 #Kaoeiri/Keiana-L3-Test6-8B-16 #llama-cpp #gguf-my-repo #base_model-Kaoeiri/Keiana-L3-Test5.4-8B-10 #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #base_model-Kaoeiri/Keiana-L3-Test6-8B-16 #region-us
|
# Kaoeiri/Keiana-L3-Test6.2-8B-18-Q6_K-GGUF
This model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test6.2-8B-18' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# Kaoeiri/Keiana-L3-Test6.2-8B-18-Q6_K-GGUF\nThis model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test6.2-8B-18' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test5.4-8B-10 #Kaoeiri/Keiana-L3-Test4.7-8B-3 #Kaoeiri/Keiana-L3-Test6-8B-16 #llama-cpp #gguf-my-repo #base_model-Kaoeiri/Keiana-L3-Test5.4-8B-10 #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #base_model-Kaoeiri/Keiana-L3-Test6-8B-16 #region-us \n",
"# Kaoeiri/Keiana-L3-Test6.2-8B-18-Q6_K-GGUF\nThis model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test6.2-8B-18' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null | # Phi-3-mini-128k-instruct

## Requirements
To use this model, you need llama.cpp installed on your machine. You can get llama.cpp from the following repository:
- [llama.cpp repository](https://github.com/ggerganov/llama.cpp)
To install llama.cpp, follow these steps:
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```
## Using the model
The model's prompt template is as follows:
```plaintext
<|user|>\n{prompt} <|end|>\n<|assistant|>
```
You can use the model in llama.cpp with the following command:
```bash
./main -m ggml-model-Q8_0.gguf -p "<|user|>\n¿Cómo te llamas? <|end|>\n<|assistant|>" --log-disable
```
LM Studio config-presets
Filename:phi-3.preset.json
```json
{
"name": "Phi-3",
"inference_params": {
"input_prefix": "<|user|>\n",
"input_suffix": "<|end|>\n<|assistant|>",
"antiprompt": [
"<|user|>\n",
"<|end|>\n<|assistant|>"
],
"pre_prompt": "<|system|>\nYou are a helpful AI assistant.<|end|>",
"pre_prompt_prefix": "",
"pre_prompt_suffix": ""
},
"load_params": {
"rope_freq_scale": 0,
"rope_freq_base": 0
}
}
```
## References
- [Original repository](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
- [llama.cpp repository](https://github.com/ggerganov/llama.cpp) | {"language": ["es", "en"], "tags": ["gguf", "llama.cpp", "phi-3", "phi-3-mini", "128k", "phi-3-mini-128k"]} | HirCoir/Phi-3-mini-4k-instruct-gguf | null | [
"gguf",
"llama.cpp",
"phi-3",
"phi-3-mini",
"128k",
"phi-3-mini-128k",
"es",
"en",
"region:us"
] | null | 2024-04-27T04:19:14+00:00 | [] | [
"es",
"en"
] | TAGS
#gguf #llama.cpp #phi-3 #phi-3-mini #128k #phi-3-mini-128k #es #en #region-us
| # Phi-3-mini-128k-instruct
!Image
## Requirements
To use this model, you need URL installed on your machine. You can get URL from the following repository:
- URL repository
To install URL, follow these steps:
## Using the model
The model's prompt template is as follows:
You can use the model in URL with the following command:
LM Studio config-presets
Filename:URL
## References
- Original repository
- URL repository | [
"# Phi-3-mini-128k-instruct\n!Image",
"## Requisitos\n\nPara usar este modelo, necesitas tener instalado URL en tu equipo. Puedes obtener URL desde el siguiente repositorio:\n\n- Repositorio de URL\n\nPara instalar URL, sigue estos pasos:",
"## Uso del modelo\n\nLa plantilla del modelo es la siguiente:\n\n\n\nPuedes utilizar el modelo en URL con el siguiente comando:\n\n\n\nLM Studio config-presets\n\nFilename:URL",
"## Referencias\n\n- Repositorio original\n- Repositorio de URL"
] | [
"TAGS\n#gguf #llama.cpp #phi-3 #phi-3-mini #128k #phi-3-mini-128k #es #en #region-us \n",
"# Phi-3-mini-128k-instruct\n!Image",
"## Requisitos\n\nPara usar este modelo, necesitas tener instalado URL en tu equipo. Puedes obtener URL desde el siguiente repositorio:\n\n- Repositorio de URL\n\nPara instalar URL, sigue estos pasos:",
"## Uso del modelo\n\nLa plantilla del modelo es la siguiente:\n\n\n\nPuedes utilizar el modelo en URL con el siguiente comando:\n\n\n\nLM Studio config-presets\n\nFilename:URL",
"## Referencias\n\n- Repositorio original\n- Repositorio de URL"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1951
- Bleu: 0.2003
- Gen Len: 18.1916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 3.6492 | 1.0 | 1617 | 3.2786 | 0.1589 | 18.21 |
| 3.5126 | 2.0 | 3234 | 3.1951 | 0.2003 | 18.1916 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
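For reference, a minimal inference sketch (the repo id `WillXH/my_awesome_opus_books_model` is taken from the metadata that follows; the English-to-French task prefix mirrors the usual opus_books tutorial setup and is an assumption, since the card does not state the language pair):

```python
from transformers import pipeline

# T5 checkpoints expect a task prefix in the input text.
translator = pipeline("text2text-generation", model="WillXH/my_awesome_opus_books_model")
text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
print(translator(text))
```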
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_opus_books_model", "results": []}]} | WillXH/my_awesome_opus_books_model | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T04:21:39+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| my\_awesome\_opus\_books\_model
===============================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.1951
* Bleu: 0.2003
* Gen Len: 18.1916
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5588
- F1 Score: 0.7340
- Accuracy: 0.7337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6202 | 0.93 | 200 | 0.5723 | 0.7236 | 0.7235 |
| 0.5647 | 1.87 | 400 | 0.5588 | 0.7257 | 0.7261 |
| 0.5465 | 2.8 | 600 | 0.5437 | 0.7375 | 0.7372 |
| 0.538 | 3.74 | 800 | 0.5386 | 0.7460 | 0.7457 |
| 0.5327 | 4.67 | 1000 | 0.5372 | 0.7410 | 0.7408 |
| 0.5206 | 5.61 | 1200 | 0.5462 | 0.7338 | 0.7343 |
| 0.5204 | 6.54 | 1400 | 0.5584 | 0.7314 | 0.7328 |
| 0.5069 | 7.48 | 1600 | 0.5359 | 0.7451 | 0.7449 |
| 0.5151 | 8.41 | 1800 | 0.5314 | 0.7425 | 0.7422 |
| 0.5056 | 9.35 | 2000 | 0.5400 | 0.7448 | 0.7446 |
| 0.5006 | 10.28 | 2200 | 0.5304 | 0.7460 | 0.7463 |
| 0.5004 | 11.21 | 2400 | 0.5401 | 0.7406 | 0.7405 |
| 0.4948 | 12.15 | 2600 | 0.5606 | 0.7377 | 0.7387 |
| 0.491 | 13.08 | 2800 | 0.5412 | 0.7367 | 0.7364 |
| 0.4902 | 14.02 | 3000 | 0.5359 | 0.7466 | 0.7463 |
| 0.4866 | 14.95 | 3200 | 0.5357 | 0.7442 | 0.7440 |
| 0.4826 | 15.89 | 3400 | 0.5392 | 0.7481 | 0.7478 |
| 0.4796 | 16.82 | 3600 | 0.5472 | 0.7441 | 0.7440 |
| 0.4801 | 17.76 | 3800 | 0.5762 | 0.7279 | 0.7302 |
| 0.4779 | 18.69 | 4000 | 0.5459 | 0.7463 | 0.7460 |
| 0.4724 | 19.63 | 4200 | 0.5413 | 0.7453 | 0.7452 |
| 0.4716 | 20.56 | 4400 | 0.5350 | 0.7493 | 0.7490 |
| 0.4689 | 21.5 | 4600 | 0.5510 | 0.7428 | 0.7431 |
| 0.4643 | 22.43 | 4800 | 0.5387 | 0.7445 | 0.7446 |
| 0.4655 | 23.36 | 5000 | 0.5401 | 0.7493 | 0.7490 |
| 0.4668 | 24.3 | 5200 | 0.5416 | 0.7490 | 0.7487 |
| 0.4607 | 25.23 | 5400 | 0.5412 | 0.7460 | 0.7457 |
| 0.4608 | 26.17 | 5600 | 0.5418 | 0.7459 | 0.7457 |
| 0.4556 | 27.1 | 5800 | 0.5428 | 0.7419 | 0.7416 |
| 0.4486 | 28.04 | 6000 | 0.5541 | 0.7498 | 0.7496 |
| 0.4544 | 28.97 | 6200 | 0.5575 | 0.7483 | 0.7481 |
| 0.4553 | 29.91 | 6400 | 0.5399 | 0.7469 | 0.7466 |
| 0.4504 | 30.84 | 6600 | 0.5560 | 0.7513 | 0.7510 |
| 0.4475 | 31.78 | 6800 | 0.5508 | 0.7504 | 0.7501 |
| 0.4495 | 32.71 | 7000 | 0.5533 | 0.7490 | 0.7487 |
| 0.4451 | 33.64 | 7200 | 0.5597 | 0.7455 | 0.7455 |
| 0.4438 | 34.58 | 7400 | 0.5496 | 0.7498 | 0.7496 |
| 0.4421 | 35.51 | 7600 | 0.5490 | 0.7478 | 0.7475 |
| 0.438 | 36.45 | 7800 | 0.5653 | 0.7490 | 0.7487 |
| 0.4441 | 37.38 | 8000 | 0.5585 | 0.7489 | 0.7487 |
| 0.4371 | 38.32 | 8200 | 0.5524 | 0.7469 | 0.7466 |
| 0.4376 | 39.25 | 8400 | 0.5513 | 0.7492 | 0.7490 |
| 0.4436 | 40.19 | 8600 | 0.5530 | 0.7493 | 0.7490 |
| 0.4405 | 41.12 | 8800 | 0.5508 | 0.7516 | 0.7513 |
| 0.4346 | 42.06 | 9000 | 0.5584 | 0.7504 | 0.7501 |
| 0.4356 | 42.99 | 9200 | 0.5598 | 0.7496 | 0.7493 |
| 0.4359 | 43.93 | 9400 | 0.5575 | 0.7510 | 0.7507 |
| 0.4328 | 44.86 | 9600 | 0.5574 | 0.7507 | 0.7504 |
| 0.4369 | 45.79 | 9800 | 0.5555 | 0.7493 | 0.7490 |
| 0.4348 | 46.73 | 10000 | 0.5572 | 0.7502 | 0.7499 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T04:22:00+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H4ac-seqsight\_8192\_512\_30M-L8\_f
=============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5588
* F1 Score: 0.7340
* Accuracy: 0.7337
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5937
- F1 Score: 0.7363
- Accuracy: 0.7361
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
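Since this checkpoint is published as a PEFT adapter, training presumably wrapped the base model with a parameter-efficient config along these lines. The rank, alpha, and target modules below are illustrative assumptions, not the released settings:

```python
# Hypothetical sketch: attaching a LoRA adapter to the base model with PEFT.
# r/lora_alpha/target_modules are guesses; trust_remote_code may be required
# if the base model ships custom code.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_8192_512_30M", num_labels=2  # assumed binary task
)
peft_config = LoraConfig(
    task_type="SEQ_CLS",
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query", "value"],  # assumed attention projections
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```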
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6064 | 0.93 | 200 | 0.5629 | 0.7249 | 0.7246 |
| 0.5531 | 1.87 | 400 | 0.5469 | 0.7386 | 0.7387 |
| 0.532 | 2.8 | 600 | 0.5376 | 0.7449 | 0.7446 |
| 0.5194 | 3.74 | 800 | 0.5275 | 0.7454 | 0.7452 |
| 0.5127 | 4.67 | 1000 | 0.5259 | 0.7445 | 0.7446 |
| 0.4997 | 5.61 | 1200 | 0.5377 | 0.7416 | 0.7416 |
| 0.4956 | 6.54 | 1400 | 0.5522 | 0.7401 | 0.7411 |
| 0.4804 | 7.48 | 1600 | 0.5274 | 0.7466 | 0.7463 |
| 0.4831 | 8.41 | 1800 | 0.5284 | 0.7478 | 0.7475 |
| 0.4717 | 9.35 | 2000 | 0.5305 | 0.7507 | 0.7504 |
| 0.465 | 10.28 | 2200 | 0.5422 | 0.7493 | 0.7493 |
| 0.4626 | 11.21 | 2400 | 0.5528 | 0.7438 | 0.7443 |
| 0.4551 | 12.15 | 2600 | 0.5676 | 0.7451 | 0.7457 |
| 0.4492 | 13.08 | 2800 | 0.5460 | 0.7502 | 0.7499 |
| 0.4427 | 14.02 | 3000 | 0.5675 | 0.7476 | 0.7475 |
| 0.4361 | 14.95 | 3200 | 0.5767 | 0.7383 | 0.7384 |
| 0.4312 | 15.89 | 3400 | 0.5419 | 0.7498 | 0.7496 |
| 0.4218 | 16.82 | 3600 | 0.5600 | 0.7355 | 0.7352 |
| 0.4215 | 17.76 | 3800 | 0.6142 | 0.7290 | 0.7320 |
| 0.4137 | 18.69 | 4000 | 0.5556 | 0.7472 | 0.7469 |
| 0.4083 | 19.63 | 4200 | 0.5550 | 0.7419 | 0.7416 |
| 0.4027 | 20.56 | 4400 | 0.5663 | 0.7419 | 0.7416 |
| 0.395 | 21.5 | 4600 | 0.5728 | 0.7406 | 0.7405 |
| 0.3889 | 22.43 | 4800 | 0.5705 | 0.7500 | 0.7499 |
| 0.3868 | 23.36 | 5000 | 0.5718 | 0.7516 | 0.7513 |
| 0.3831 | 24.3 | 5200 | 0.5898 | 0.7428 | 0.7425 |
| 0.3745 | 25.23 | 5400 | 0.5969 | 0.7466 | 0.7463 |
| 0.3714 | 26.17 | 5600 | 0.6069 | 0.7493 | 0.7490 |
| 0.3632 | 27.1 | 5800 | 0.6047 | 0.7416 | 0.7416 |
| 0.3562 | 28.04 | 6000 | 0.6131 | 0.7460 | 0.7457 |
| 0.3579 | 28.97 | 6200 | 0.6060 | 0.7448 | 0.7446 |
| 0.3554 | 29.91 | 6400 | 0.5947 | 0.7417 | 0.7413 |
| 0.3493 | 30.84 | 6600 | 0.6164 | 0.7451 | 0.7449 |
| 0.3429 | 31.78 | 6800 | 0.6179 | 0.7437 | 0.7434 |
| 0.3424 | 32.71 | 7000 | 0.6248 | 0.7466 | 0.7463 |
| 0.3384 | 33.64 | 7200 | 0.6480 | 0.7419 | 0.7419 |
| 0.3338 | 34.58 | 7400 | 0.6411 | 0.7422 | 0.7422 |
| 0.3312 | 35.51 | 7600 | 0.6297 | 0.7408 | 0.7408 |
| 0.3251 | 36.45 | 7800 | 0.6505 | 0.7425 | 0.7425 |
| 0.3277 | 37.38 | 8000 | 0.6475 | 0.7431 | 0.7428 |
| 0.3225 | 38.32 | 8200 | 0.6437 | 0.7437 | 0.7434 |
| 0.3162 | 39.25 | 8400 | 0.6590 | 0.7428 | 0.7425 |
| 0.3209 | 40.19 | 8600 | 0.6614 | 0.7436 | 0.7434 |
| 0.3163 | 41.12 | 8800 | 0.6600 | 0.7431 | 0.7431 |
| 0.314 | 42.06 | 9000 | 0.6631 | 0.7478 | 0.7475 |
| 0.3126 | 42.99 | 9200 | 0.6703 | 0.7438 | 0.7437 |
| 0.3105 | 43.93 | 9400 | 0.6644 | 0.7456 | 0.7455 |
| 0.3083 | 44.86 | 9600 | 0.6638 | 0.7457 | 0.7455 |
| 0.3069 | 45.79 | 9800 | 0.6666 | 0.7448 | 0.7446 |
| 0.3061 | 46.73 | 10000 | 0.6685 | 0.7433 | 0.7431 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T04:22:13+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H4ac-seqsight\_8192\_512\_30M-L32\_f
==============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5937
* F1 Score: 0.7363
* Accuracy: 0.7361
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
# Kaoeiri/Keiana-L3-Test5.2-8B-8-Q6_K-GGUF
This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test5.2-8B-8`](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.2-8B-8) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.2-8B-8) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Kaoeiri/Keiana-L3-Test5.2-8B-8-Q6_K-GGUF --model keiana-l3-test5.2-8b-8.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Kaoeiri/Keiana-L3-Test5.2-8B-8-Q6_K-GGUF --model keiana-l3-test5.2-8b-8.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test5.2-8b-8.Q6_K.gguf -n 128
```
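For Python use, a minimal sketch with the `llama-cpp-python` bindings (an assumption — any GGUF-compatible runtime works; install with `pip install llama-cpp-python`):

```python
# Hypothetical sketch using llama-cpp-python with the Q6_K file from this repo.
from llama_cpp import Llama

llm = Llama(model_path="keiana-l3-test5.2-8b-8.Q6_K.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```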
| {"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "DevsDoCode/LLama-3-8b-Uncensored", "Orenguteng/Llama-3-8B-Lexi-Uncensored", "llama-cpp", "gguf-my-repo"], "base_model": ["Kaoeiri/Keiana-L3-Test4.7-8B-3", "DevsDoCode/LLama-3-8b-Uncensored", "Orenguteng/Llama-3-8B-Lexi-Uncensored"]} | Kaoeiri/Keiana-L3-Test5.2-8B-8-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Kaoeiri/Keiana-L3-Test4.7-8B-3",
"DevsDoCode/LLama-3-8b-Uncensored",
"Orenguteng/Llama-3-8B-Lexi-Uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3",
"base_model:DevsDoCode/LLama-3-8b-Uncensored",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"region:us"
] | null | 2024-04-27T04:22:20+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test4.7-8B-3 #DevsDoCode/LLama-3-8b-Uncensored #Orenguteng/Llama-3-8B-Lexi-Uncensored #llama-cpp #gguf-my-repo #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #base_model-DevsDoCode/LLama-3-8b-Uncensored #base_model-Orenguteng/Llama-3-8B-Lexi-Uncensored #region-us
|
# Kaoeiri/Keiana-L3-Test5.2-8B-8-Q6_K-GGUF
This model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test5.2-8B-8' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# Kaoeiri/Keiana-L3-Test5.2-8B-8-Q6_K-GGUF\nThis model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test5.2-8B-8' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test4.7-8B-3 #DevsDoCode/LLama-3-8b-Uncensored #Orenguteng/Llama-3-8B-Lexi-Uncensored #llama-cpp #gguf-my-repo #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #base_model-DevsDoCode/LLama-3-8b-Uncensored #base_model-Orenguteng/Llama-3-8B-Lexi-Uncensored #region-us \n",
"# Kaoeiri/Keiana-L3-Test5.2-8B-8-Q6_K-GGUF\nThis model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test5.2-8B-8' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# Kaoeiri/Keiana-L3-Test4.7-8B-3-Q6_K-GGUF
This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test4.7-8B-3`](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.7-8B-3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.7-8B-3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Kaoeiri/Keiana-L3-Test4.7-8B-3-Q6_K-GGUF --model keiana-l3-test4.7-8b-3.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Kaoeiri/Keiana-L3-Test4.7-8B-3-Q6_K-GGUF --model keiana-l3-test4.7-8b-3.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test4.7-8b-3.Q6_K.gguf -n 128
```
| {"tags": ["merge", "mergekit", "lazymergekit", "jeiku/Average_Normie_l3_v1_8B", "Kaoeiri/Keiana-L3-Test4.6-8B-2", "llama-cpp", "gguf-my-repo"], "base_model": ["jeiku/Average_Normie_l3_v1_8B", "Kaoeiri/Keiana-L3-Test4.6-8B-2"]} | Kaoeiri/Keiana-L3-Test4.7-8B-3-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"jeiku/Average_Normie_l3_v1_8B",
"Kaoeiri/Keiana-L3-Test4.6-8B-2",
"llama-cpp",
"gguf-my-repo",
"base_model:jeiku/Average_Normie_l3_v1_8B",
"base_model:Kaoeiri/Keiana-L3-Test4.6-8B-2",
"region:us"
] | null | 2024-04-27T04:24:45+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #jeiku/Average_Normie_l3_v1_8B #Kaoeiri/Keiana-L3-Test4.6-8B-2 #llama-cpp #gguf-my-repo #base_model-jeiku/Average_Normie_l3_v1_8B #base_model-Kaoeiri/Keiana-L3-Test4.6-8B-2 #region-us
|
# Kaoeiri/Keiana-L3-Test4.7-8B-3-Q6_K-GGUF
This model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test4.7-8B-3' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# Kaoeiri/Keiana-L3-Test4.7-8B-3-Q6_K-GGUF\nThis model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test4.7-8B-3' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #jeiku/Average_Normie_l3_v1_8B #Kaoeiri/Keiana-L3-Test4.6-8B-2 #llama-cpp #gguf-my-repo #base_model-jeiku/Average_Normie_l3_v1_8B #base_model-Kaoeiri/Keiana-L3-Test4.6-8B-2 #region-us \n",
"# Kaoeiri/Keiana-L3-Test4.7-8B-3-Q6_K-GGUF\nThis model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test4.7-8B-3' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shawgpt-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
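Note that with a GPTQ base the quantized weights stay frozen; what is trained is a PEFT adapter on top, with the effective batch size of 16 coming from 4 × 4 gradient accumulation. Below is a hedged sketch of the setup (the data pipeline is omitted, and loading GPTQ checkpoints assumes `optimum`/`auto-gptq` are installed):

```python
# Hypothetical sketch: GPTQ base model plus the training arguments listed above.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

model_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

args = TrainingArguments(
    output_dir="shawgpt-ft",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # 4 x 4 = total train batch of 16
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=10,
    fp16=True,                      # "Native AMP" mixed precision
    seed=42,
)
```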
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.5944 | 0.9231 | 3 | 3.9701 |
| 4.0554 | 1.8462 | 6 | 3.4516 |
| 3.4854 | 2.7692 | 9 | 3.0035 |
| 2.2744 | 4.0 | 13 | 2.5726 |
| 2.6881 | 4.9231 | 16 | 2.3152 |
| 2.3667 | 5.8462 | 19 | 2.1328 |
| 2.1502 | 6.7692 | 22 | 1.9922 |
| 1.5481 | 8.0 | 26 | 1.9571 |
| 2.0213 | 8.9231 | 29 | 1.9166 |
| 1.3996 | 9.2308 | 30 | 1.9042 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "shawgpt-ft", "results": []}]} | Jerry-Qiu/shawgpt-ft | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-04-27T04:25:33+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us
| shawgpt-ft
==========
This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9042
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.1.0+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.1.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | null | Zipped Version of
https://huggingface.co/datasets/gvecchio/MatSynth | {"license": "cc0-1.0"} | NightRaven109/MatsynthCC0Zipped | null | [
"license:cc0-1.0",
"region:us"
] | null | 2024-04-27T04:27:36+00:00 | [] | [] | TAGS
#license-cc0-1.0 #region-us
| Zipped Version of
URL | [] | [
"TAGS\n#license-cc0-1.0 #region-us \n"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs128_nodpo_only4w_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
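The card's tags indicate DPO training with TRL. The sketch below is a hedged reconstruction: the preference dataset is a placeholder, and `beta=0.001` is only inferred from the repository name:

```python
# Hypothetical sketch of the DPO setup; the real "updated"/"original" preference
# data and the exact TRL version are not specified by the card.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "HuggingFaceH4/mistral-7b-sft-beta"
model = AutoModelForCausalLM.from_pretrained(model_id)
ref_model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

train_dataset = Dataset.from_dict({          # placeholder preference pairs
    "prompt": ["..."], "chosen": ["..."], "rejected": ["..."],
})

args = TrainingArguments(
    output_dir="0.001_4iters_bs128_nodpo_only4w_iter_1",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,           # 8 GPUs x 8 x 2 = total batch 128
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    beta=0.001,                              # assumed from the repo name
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
```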
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_4iters_bs128_nodpo_only4w_iter_1", "results": []}]} | ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T04:28:35+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_4iters_bs128_nodpo_only4w_iter_1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| [
"# 0.001_4iters_bs128_nodpo_only4w_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_4iters_bs128_nodpo_only4w_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4316
- F1 Score: 0.8132
- Accuracy: 0.8138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
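The F1 and accuracy columns in the table below can be reproduced with a standard `compute_metrics` hook; the sketch assumes macro-averaged F1, which is a guess:

```python
# Hypothetical sketch of the metric computation reported as "F1 Score"/"Accuracy".
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging is assumed
        "accuracy": accuracy_score(labels, preds),
    }
```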
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5352 | 1.1 | 200 | 0.4690 | 0.7952 | 0.7954 |
| 0.4694 | 2.21 | 400 | 0.4722 | 0.7901 | 0.7926 |
| 0.4575 | 3.31 | 600 | 0.4560 | 0.7969 | 0.7989 |
| 0.4479 | 4.42 | 800 | 0.4505 | 0.7999 | 0.8013 |
| 0.4465 | 5.52 | 1000 | 0.4660 | 0.7970 | 0.7996 |
| 0.4395 | 6.63 | 1200 | 0.4627 | 0.7932 | 0.7958 |
| 0.4435 | 7.73 | 1400 | 0.4453 | 0.7982 | 0.7996 |
| 0.4352 | 8.84 | 1600 | 0.4641 | 0.7974 | 0.7999 |
| 0.4361 | 9.94 | 1800 | 0.4368 | 0.8123 | 0.8124 |
| 0.4324 | 11.05 | 2000 | 0.4510 | 0.7997 | 0.8013 |
| 0.4324 | 12.15 | 2200 | 0.4404 | 0.8069 | 0.8079 |
| 0.4257 | 13.26 | 2400 | 0.4469 | 0.8022 | 0.8037 |
| 0.4249 | 14.36 | 2600 | 0.4371 | 0.8083 | 0.8089 |
| 0.4263 | 15.47 | 2800 | 0.4491 | 0.7978 | 0.7999 |
| 0.4245 | 16.57 | 3000 | 0.4368 | 0.8084 | 0.8086 |
| 0.4236 | 17.68 | 3200 | 0.4374 | 0.8021 | 0.8031 |
| 0.4198 | 18.78 | 3400 | 0.4357 | 0.8062 | 0.8069 |
| 0.4188 | 19.89 | 3600 | 0.4417 | 0.8035 | 0.8051 |
| 0.4196 | 20.99 | 3800 | 0.4429 | 0.8041 | 0.8055 |
| 0.4185 | 22.1 | 4000 | 0.4345 | 0.8073 | 0.8086 |
| 0.4156 | 23.2 | 4200 | 0.4369 | 0.8083 | 0.8093 |
| 0.4174 | 24.31 | 4400 | 0.4499 | 0.8046 | 0.8065 |
| 0.41 | 25.41 | 4600 | 0.4421 | 0.8105 | 0.8117 |
| 0.4161 | 26.52 | 4800 | 0.4367 | 0.8090 | 0.8100 |
| 0.4151 | 27.62 | 5000 | 0.4402 | 0.8061 | 0.8076 |
| 0.4116 | 28.73 | 5200 | 0.4370 | 0.8052 | 0.8069 |
| 0.4073 | 29.83 | 5400 | 0.4342 | 0.8116 | 0.8124 |
| 0.4084 | 30.94 | 5600 | 0.4343 | 0.8111 | 0.8121 |
| 0.4099 | 32.04 | 5800 | 0.4295 | 0.8134 | 0.8138 |
| 0.4065 | 33.15 | 6000 | 0.4322 | 0.8105 | 0.8114 |
| 0.4066 | 34.25 | 6200 | 0.4361 | 0.8091 | 0.8100 |
| 0.406 | 35.36 | 6400 | 0.4366 | 0.8113 | 0.8124 |
| 0.4067 | 36.46 | 6600 | 0.4307 | 0.8151 | 0.8155 |
| 0.4074 | 37.57 | 6800 | 0.4384 | 0.8073 | 0.8086 |
| 0.4043 | 38.67 | 7000 | 0.4383 | 0.8102 | 0.8114 |
| 0.4037 | 39.78 | 7200 | 0.4360 | 0.8107 | 0.8117 |
| 0.4066 | 40.88 | 7400 | 0.4349 | 0.8115 | 0.8124 |
| 0.4065 | 41.99 | 7600 | 0.4334 | 0.8115 | 0.8124 |
| 0.4026 | 43.09 | 7800 | 0.4390 | 0.8109 | 0.8121 |
| 0.4048 | 44.2 | 8000 | 0.4384 | 0.8077 | 0.8089 |
| 0.4013 | 45.3 | 8200 | 0.4334 | 0.8133 | 0.8141 |
| 0.4039 | 46.41 | 8400 | 0.4322 | 0.8127 | 0.8135 |
| 0.4055 | 47.51 | 8600 | 0.4366 | 0.8119 | 0.8131 |
| 0.3996 | 48.62 | 8800 | 0.4373 | 0.8102 | 0.8114 |
| 0.3991 | 49.72 | 9000 | 0.4363 | 0.8103 | 0.8114 |
| 0.4059 | 50.83 | 9200 | 0.4392 | 0.8103 | 0.8117 |
| 0.4004 | 51.93 | 9400 | 0.4362 | 0.8103 | 0.8114 |
| 0.4009 | 53.04 | 9600 | 0.4354 | 0.8111 | 0.8121 |
| 0.3991 | 54.14 | 9800 | 0.4346 | 0.8122 | 0.8131 |
| 0.3994 | 55.25 | 10000 | 0.4364 | 0.8103 | 0.8114 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T04:29:25+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_8192\_512\_30M-L1\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4316
* F1 Score: 0.8132
* Accuracy: 0.8138
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
# Kaoeiri/Keiana-L3-Test6.1-8B-17-Q6_K-GGUF
This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test6.1-8B-17`](https://huggingface.co/Kaoeiri/Keiana-L3-Test6.1-8B-17) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test6.1-8B-17) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Kaoeiri/Keiana-L3-Test6.1-8B-17-Q6_K-GGUF --model keiana-l3-test6.1-8b-17.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Kaoeiri/Keiana-L3-Test6.1-8B-17-Q6_K-GGUF --model keiana-l3-test6.1-8b-17.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test6.1-8b-17.Q6_K.gguf -n 128
```
| {"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.4-8B-10", "Kaoeiri/Keiana-L3-Test6-8B-16", "llama-cpp", "gguf-my-repo"], "base_model": ["Kaoeiri/Keiana-L3-Test5.4-8B-10", "Kaoeiri/Keiana-L3-Test6-8B-16"]} | Kaoeiri/Keiana-L3-Test6.1-8B-17-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Kaoeiri/Keiana-L3-Test5.4-8B-10",
"Kaoeiri/Keiana-L3-Test6-8B-16",
"llama-cpp",
"gguf-my-repo",
"base_model:Kaoeiri/Keiana-L3-Test5.4-8B-10",
"base_model:Kaoeiri/Keiana-L3-Test6-8B-16",
"region:us"
] | null | 2024-04-27T04:29:26+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test5.4-8B-10 #Kaoeiri/Keiana-L3-Test6-8B-16 #llama-cpp #gguf-my-repo #base_model-Kaoeiri/Keiana-L3-Test5.4-8B-10 #base_model-Kaoeiri/Keiana-L3-Test6-8B-16 #region-us
|
# Kaoeiri/Keiana-L3-Test6.1-8B-17-Q6_K-GGUF
This model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test6.1-8B-17' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# Kaoeiri/Keiana-L3-Test6.1-8B-17-Q6_K-GGUF\nThis model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test6.1-8B-17' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test5.4-8B-10 #Kaoeiri/Keiana-L3-Test6-8B-16 #llama-cpp #gguf-my-repo #base_model-Kaoeiri/Keiana-L3-Test5.4-8B-10 #base_model-Kaoeiri/Keiana-L3-Test6-8B-16 #region-us \n",
"# Kaoeiri/Keiana-L3-Test6.1-8B-17-Q6_K-GGUF\nThis model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test6.1-8B-17' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
reinforcement-learning | sample-factory |
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r UXAIR/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count it previously reached.
| {"library_name": "sample-factory", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "sample-factory"], "model-index": [{"name": "APPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "doom_health_gathering_supreme", "type": "doom_health_gathering_supreme"}, "metrics": [{"type": "mean_reward", "value": "12.30 +/- 4.46", "name": "mean_reward", "verified": false}]}]}]} | UXAIR/rl_course_vizdoom_health_gathering_supreme | null | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-27T04:31:04+00:00 | [] | [] | TAGS
#sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
An APPO model trained on the doom_health_gathering_supreme environment.
This model was trained using Sample-Factory 2.0: URL
Documentation for how to use Sample-Factory can be found at URL
## Downloading the model
After installing Sample-Factory, download the model with:
## Using the model
To run the model after download, use the 'enjoy' script corresponding to this environment:
You can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.
See URL for more details
## Training with this model
To continue training with this model, use the 'train' script corresponding to this environment:
Note: you may have to adjust '--train_for_env_steps' to a suitably high number, as the experiment will resume from the step count it previously reached.
| [
"## Downloading the model\n\nAfter installing Sample-Factory, download the model with:",
"## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details",
"## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at."
] | [
"TAGS\n#sample-factory #tensorboard #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"## Downloading the model\n\nAfter installing Sample-Factory, download the model with:",
"## Using the model\n\nTo run the model after download, use the 'enjoy' script corresponding to this environment:\n\n\n\nYou can also upload models to the Hugging Face Hub using the same script with the '--push_to_hub' flag.\nSee URL for more details",
"## Training with this model\n\nTo continue training with this model, use the 'train' script corresponding to this environment:\n\n\nNote, you may have to adjust '--train_for_env_steps' to a suitably high number as the experiment will resume at the number of steps it concluded at."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-llama-adaptertoxic2nontoxic-100-filtered-50-0.006 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T04:31:48+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4375
- F1 Score: 0.8244
- Accuracy: 0.8245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
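Once trained, the adapter can be loaded back on top of its base model with PEFT; the sequence below is only an example input, and `num_labels=2` is an assumption:

```python
# Hypothetical sketch: inference with this adapter via PeftModel.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_8192_512_30M-L8_f"

base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # example DNA sequence
logits = model(**inputs).logits
```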
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5123 | 1.1 | 200 | 0.4536 | 0.8040 | 0.8041 |
| 0.4564 | 2.21 | 400 | 0.4467 | 0.8047 | 0.8058 |
| 0.446 | 3.31 | 600 | 0.4426 | 0.8036 | 0.8051 |
| 0.4353 | 4.42 | 800 | 0.4393 | 0.8096 | 0.8107 |
| 0.4315 | 5.52 | 1000 | 0.4450 | 0.8019 | 0.8041 |
| 0.4221 | 6.63 | 1200 | 0.4508 | 0.8063 | 0.8086 |
| 0.4241 | 7.73 | 1400 | 0.4404 | 0.8063 | 0.8083 |
| 0.4164 | 8.84 | 1600 | 0.4509 | 0.8008 | 0.8034 |
| 0.4135 | 9.94 | 1800 | 0.4296 | 0.8136 | 0.8135 |
| 0.4082 | 11.05 | 2000 | 0.4409 | 0.8169 | 0.8176 |
| 0.4079 | 12.15 | 2200 | 0.4219 | 0.8198 | 0.8200 |
| 0.3966 | 13.26 | 2400 | 0.4283 | 0.8162 | 0.8169 |
| 0.3981 | 14.36 | 2600 | 0.4254 | 0.8216 | 0.8218 |
| 0.3954 | 15.47 | 2800 | 0.4260 | 0.8186 | 0.8190 |
| 0.3937 | 16.57 | 3000 | 0.4355 | 0.8167 | 0.8166 |
| 0.3904 | 17.68 | 3200 | 0.4203 | 0.8237 | 0.8239 |
| 0.386 | 18.78 | 3400 | 0.4323 | 0.8162 | 0.8169 |
| 0.3832 | 19.89 | 3600 | 0.4207 | 0.8223 | 0.8225 |
| 0.3835 | 20.99 | 3800 | 0.4314 | 0.8171 | 0.8176 |
| 0.3806 | 22.1 | 4000 | 0.4195 | 0.8218 | 0.8221 |
| 0.378 | 23.2 | 4200 | 0.4258 | 0.8191 | 0.8193 |
| 0.3775 | 24.31 | 4400 | 0.4465 | 0.8104 | 0.8121 |
| 0.3697 | 25.41 | 4600 | 0.4322 | 0.8245 | 0.8245 |
| 0.3747 | 26.52 | 4800 | 0.4342 | 0.8162 | 0.8166 |
| 0.3721 | 27.62 | 5000 | 0.4302 | 0.8177 | 0.8187 |
| 0.3682 | 28.73 | 5200 | 0.4241 | 0.8172 | 0.8180 |
| 0.3591 | 29.83 | 5400 | 0.4314 | 0.8182 | 0.8183 |
| 0.3624 | 30.94 | 5600 | 0.4287 | 0.8180 | 0.8183 |
| 0.3631 | 32.04 | 5800 | 0.4340 | 0.8198 | 0.8197 |
| 0.3578 | 33.15 | 6000 | 0.4265 | 0.8176 | 0.8180 |
| 0.3551 | 34.25 | 6200 | 0.4438 | 0.8204 | 0.8204 |
| 0.3542 | 35.36 | 6400 | 0.4340 | 0.8229 | 0.8232 |
| 0.3537 | 36.46 | 6600 | 0.4387 | 0.8192 | 0.8193 |
| 0.3502 | 37.57 | 6800 | 0.4388 | 0.8166 | 0.8173 |
| 0.3512 | 38.67 | 7000 | 0.4376 | 0.8155 | 0.8162 |
| 0.3476 | 39.78 | 7200 | 0.4419 | 0.8176 | 0.8180 |
| 0.3492 | 40.88 | 7400 | 0.4343 | 0.8209 | 0.8211 |
| 0.3479 | 41.99 | 7600 | 0.4364 | 0.8188 | 0.8190 |
| 0.344 | 43.09 | 7800 | 0.4412 | 0.8159 | 0.8162 |
| 0.3454 | 44.2 | 8000 | 0.4442 | 0.8134 | 0.8138 |
| 0.3414 | 45.3 | 8200 | 0.4406 | 0.8165 | 0.8166 |
| 0.3432 | 46.41 | 8400 | 0.4390 | 0.8154 | 0.8155 |
| 0.344 | 47.51 | 8600 | 0.4448 | 0.8142 | 0.8148 |
| 0.3386 | 48.62 | 8800 | 0.4412 | 0.8114 | 0.8117 |
| 0.3374 | 49.72 | 9000 | 0.4434 | 0.8154 | 0.8155 |
| 0.3409 | 50.83 | 9200 | 0.4448 | 0.8131 | 0.8138 |
| 0.336 | 51.93 | 9400 | 0.4452 | 0.8131 | 0.8135 |
| 0.3364 | 53.04 | 9600 | 0.4439 | 0.8150 | 0.8152 |
| 0.336 | 54.14 | 9800 | 0.4440 | 0.8154 | 0.8155 |
| 0.3336 | 55.25 | 10000 | 0.4458 | 0.8125 | 0.8128 |
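The F1 and accuracy columns above could be produced by a `compute_metrics` callback along these lines; macro averaging is an assumption, since the card does not state how F1 is aggregated:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)  # pick the higher-scoring class
    return {
        "f1": f1_score(labels, preds, average="macro"),  # averaging mode is an assumption
        "accuracy": accuracy_score(labels, preds),
    }
```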
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T04:32:10+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_8192\_512\_30M-L8\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4375
* F1 Score: 0.8244
* Accuracy: 0.8245
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained agent from the Hub (the checkpoint filename is an assumption; check the repo's file list if it differs):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("Bluezealot/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
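To sanity-check the reported score (261.16 +/- 23.40 in the model index), roll the policy out with `evaluate_policy`. This sketch assumes a Gymnasium-era stable-baselines3 (v2.x); the episode count is an arbitrary choice:

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# LunarLander-v2 needs gymnasium[box2d]; very recent Gymnasium releases rename it LunarLander-v3.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```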
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "261.16 +/- 23.40", "name": "mean_reward", "verified": false}]}]}]} | Bluezealot/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-27T04:32:32+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4359
- F1 Score: 0.8208
- Accuracy: 0.8211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5019 | 1.1 | 200 | 0.4476 | 0.8085 | 0.8086 |
| 0.4489 | 2.21 | 400 | 0.4375 | 0.8086 | 0.8093 |
| 0.4365 | 3.31 | 600 | 0.4302 | 0.8109 | 0.8114 |
| 0.4244 | 4.42 | 800 | 0.4360 | 0.8104 | 0.8114 |
| 0.4168 | 5.52 | 1000 | 0.4306 | 0.8162 | 0.8176 |
| 0.4063 | 6.63 | 1200 | 0.4478 | 0.8083 | 0.8107 |
| 0.4045 | 7.73 | 1400 | 0.4386 | 0.8063 | 0.8083 |
| 0.3952 | 8.84 | 1600 | 0.4484 | 0.7970 | 0.7999 |
| 0.3863 | 9.94 | 1800 | 0.4294 | 0.8200 | 0.8200 |
| 0.3787 | 11.05 | 2000 | 0.4395 | 0.8155 | 0.8159 |
| 0.3747 | 12.15 | 2200 | 0.4236 | 0.8245 | 0.8249 |
| 0.3582 | 13.26 | 2400 | 0.4277 | 0.8223 | 0.8228 |
| 0.36 | 14.36 | 2600 | 0.4259 | 0.8287 | 0.8287 |
| 0.3505 | 15.47 | 2800 | 0.4392 | 0.8226 | 0.8232 |
| 0.3426 | 16.57 | 3000 | 0.4368 | 0.8135 | 0.8135 |
| 0.3362 | 17.68 | 3200 | 0.4451 | 0.8124 | 0.8128 |
| 0.331 | 18.78 | 3400 | 0.4654 | 0.8132 | 0.8145 |
| 0.3216 | 19.89 | 3600 | 0.4437 | 0.8171 | 0.8173 |
| 0.3191 | 20.99 | 3800 | 0.4666 | 0.8074 | 0.8083 |
| 0.3107 | 22.1 | 4000 | 0.4690 | 0.8161 | 0.8166 |
| 0.3065 | 23.2 | 4200 | 0.4891 | 0.8091 | 0.8100 |
| 0.2999 | 24.31 | 4400 | 0.4761 | 0.8071 | 0.8079 |
| 0.2885 | 25.41 | 4600 | 0.4976 | 0.8102 | 0.8107 |
| 0.2887 | 26.52 | 4800 | 0.5042 | 0.8034 | 0.8041 |
| 0.2821 | 27.62 | 5000 | 0.5102 | 0.8063 | 0.8072 |
| 0.2758 | 28.73 | 5200 | 0.4874 | 0.8044 | 0.8044 |
| 0.2646 | 29.83 | 5400 | 0.5053 | 0.8059 | 0.8062 |
| 0.262 | 30.94 | 5600 | 0.5014 | 0.8131 | 0.8131 |
| 0.2567 | 32.04 | 5800 | 0.5043 | 0.8153 | 0.8152 |
| 0.2495 | 33.15 | 6000 | 0.5339 | 0.8105 | 0.8107 |
| 0.2469 | 34.25 | 6200 | 0.5518 | 0.8027 | 0.8027 |
| 0.2423 | 35.36 | 6400 | 0.5663 | 0.8073 | 0.8079 |
| 0.2328 | 36.46 | 6600 | 0.5792 | 0.8006 | 0.8013 |
| 0.2368 | 37.57 | 6800 | 0.5631 | 0.7976 | 0.7982 |
| 0.2311 | 38.67 | 7000 | 0.5855 | 0.7962 | 0.7975 |
| 0.2234 | 39.78 | 7200 | 0.5730 | 0.8040 | 0.8044 |
| 0.2256 | 40.88 | 7400 | 0.5779 | 0.8062 | 0.8065 |
| 0.2206 | 41.99 | 7600 | 0.5606 | 0.7999 | 0.8006 |
| 0.2135 | 43.09 | 7800 | 0.5849 | 0.8036 | 0.8041 |
| 0.2118 | 44.2 | 8000 | 0.6146 | 0.7986 | 0.7989 |
| 0.2114 | 45.3 | 8200 | 0.5932 | 0.8028 | 0.8034 |
| 0.207 | 46.41 | 8400 | 0.6012 | 0.8057 | 0.8062 |
| 0.2056 | 47.51 | 8600 | 0.6424 | 0.8006 | 0.8017 |
| 0.2007 | 48.62 | 8800 | 0.6087 | 0.8023 | 0.8027 |
| 0.2008 | 49.72 | 9000 | 0.6284 | 0.8072 | 0.8079 |
| 0.2004 | 50.83 | 9200 | 0.6236 | 0.8014 | 0.8024 |
| 0.1975 | 51.93 | 9400 | 0.6266 | 0.8048 | 0.8055 |
| 0.1932 | 53.04 | 9600 | 0.6301 | 0.8072 | 0.8076 |
| 0.1945 | 54.14 | 9800 | 0.6322 | 0.8061 | 0.8065 |
| 0.1889 | 55.25 | 10000 | 0.6349 | 0.8067 | 0.8072 |
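Validation loss drifts upward after roughly epoch 12 in the table above, while the headline evaluation loss (0.4359) matches the early-to-mid checkpoints, which suggests (the card does not say so explicitly) that the best checkpoint rather than the final one was kept. With the Trainer API, that selection is sketched by flags like these; all of them are assumptions:

```python
from transformers import TrainingArguments

selection_args = TrainingArguments(
    output_dir="out",                  # assumption
    max_steps=10_000,
    evaluation_strategy="steps",
    eval_steps=200,
    save_strategy="steps",             # must match the evaluation cadence
    save_steps=200,
    load_best_model_at_end=True,       # restore the lowest-loss checkpoint
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
```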
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T04:35:46+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_8192\_512\_30M-L32\_f
==================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4359
* F1 Score: 0.8208
* Accuracy: 0.8211
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5150
- F1 Score: 0.7635
- Accuracy: 0.7652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
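In raw PyTorch terms, the optimizer and schedule above correspond roughly to the pairing below; the warmup step count is not reported, so zero is an assumption:

```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000  # warmup=0 is an assumption
)
```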
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6225 | 1.01 | 200 | 0.5992 | 0.6930 | 0.6967 |
| 0.5926 | 2.02 | 400 | 0.5784 | 0.7250 | 0.7263 |
| 0.5714 | 3.03 | 600 | 0.5663 | 0.7305 | 0.7330 |
| 0.5597 | 4.04 | 800 | 0.5512 | 0.7461 | 0.7478 |
| 0.5514 | 5.05 | 1000 | 0.5422 | 0.7456 | 0.7468 |
| 0.5466 | 6.06 | 1200 | 0.5436 | 0.7498 | 0.7525 |
| 0.5396 | 7.07 | 1400 | 0.5407 | 0.7553 | 0.7573 |
| 0.5372 | 8.08 | 1600 | 0.5417 | 0.7541 | 0.7566 |
| 0.5358 | 9.09 | 1800 | 0.5323 | 0.7580 | 0.7598 |
| 0.5312 | 10.1 | 2000 | 0.5289 | 0.7610 | 0.7623 |
| 0.5279 | 11.11 | 2200 | 0.5370 | 0.7585 | 0.7604 |
| 0.5275 | 12.12 | 2400 | 0.5309 | 0.7567 | 0.7582 |
| 0.5262 | 13.13 | 2600 | 0.5323 | 0.7604 | 0.7623 |
| 0.5265 | 14.14 | 2800 | 0.5272 | 0.7585 | 0.7607 |
| 0.521 | 15.15 | 3000 | 0.5310 | 0.7561 | 0.7585 |
| 0.5237 | 16.16 | 3200 | 0.5328 | 0.7549 | 0.7582 |
| 0.5195 | 17.17 | 3400 | 0.5343 | 0.7592 | 0.7617 |
| 0.5219 | 18.18 | 3600 | 0.5207 | 0.7611 | 0.7623 |
| 0.5183 | 19.19 | 3800 | 0.5260 | 0.7569 | 0.7595 |
| 0.5191 | 20.2 | 4000 | 0.5227 | 0.7593 | 0.7610 |
| 0.5174 | 21.21 | 4200 | 0.5325 | 0.7567 | 0.7595 |
| 0.5145 | 22.22 | 4400 | 0.5262 | 0.7607 | 0.7626 |
| 0.5122 | 23.23 | 4600 | 0.5276 | 0.7592 | 0.7620 |
| 0.5165 | 24.24 | 4800 | 0.5225 | 0.7623 | 0.7645 |
| 0.5084 | 25.25 | 5000 | 0.5206 | 0.7651 | 0.7667 |
| 0.5129 | 26.26 | 5200 | 0.5235 | 0.7639 | 0.7648 |
| 0.5106 | 27.27 | 5400 | 0.5214 | 0.7615 | 0.7636 |
| 0.5139 | 28.28 | 5600 | 0.5185 | 0.7625 | 0.7639 |
| 0.5135 | 29.29 | 5800 | 0.5295 | 0.7553 | 0.7588 |
| 0.5081 | 30.3 | 6000 | 0.5202 | 0.7638 | 0.7658 |
| 0.5099 | 31.31 | 6200 | 0.5213 | 0.7633 | 0.7652 |
| 0.5086 | 32.32 | 6400 | 0.5280 | 0.7590 | 0.7620 |
| 0.5065 | 33.33 | 6600 | 0.5239 | 0.7584 | 0.7610 |
| 0.505 | 34.34 | 6800 | 0.5262 | 0.7589 | 0.7617 |
| 0.5045 | 35.35 | 7000 | 0.5219 | 0.7656 | 0.7670 |
| 0.5098 | 36.36 | 7200 | 0.5177 | 0.7624 | 0.7645 |
| 0.5041 | 37.37 | 7400 | 0.5189 | 0.7639 | 0.7658 |
| 0.5059 | 38.38 | 7600 | 0.5194 | 0.7656 | 0.7670 |
| 0.504 | 39.39 | 7800 | 0.5201 | 0.7627 | 0.7645 |
| 0.5049 | 40.4 | 8000 | 0.5211 | 0.7654 | 0.7670 |
| 0.504 | 41.41 | 8200 | 0.5216 | 0.7599 | 0.7623 |
| 0.5073 | 42.42 | 8400 | 0.5222 | 0.7586 | 0.7610 |
| 0.5042 | 43.43 | 8600 | 0.5212 | 0.7611 | 0.7633 |
| 0.5032 | 44.44 | 8800 | 0.5197 | 0.7634 | 0.7655 |
| 0.5024 | 45.45 | 9000 | 0.5200 | 0.7652 | 0.7670 |
| 0.5023 | 46.46 | 9200 | 0.5223 | 0.7627 | 0.7648 |
| 0.5047 | 47.47 | 9400 | 0.5201 | 0.7639 | 0.7661 |
| 0.4987 | 48.48 | 9600 | 0.5215 | 0.7634 | 0.7655 |
| 0.508 | 49.49 | 9800 | 0.5202 | 0.7649 | 0.7670 |
| 0.5021 | 50.51 | 10000 | 0.5200 | 0.7645 | 0.7664 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T04:35:46+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_8192\_512\_30M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5150
* F1 Score: 0.7635
* Accuracy: 0.7652
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | terry69/llama2-5p-full | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T04:36:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# LLaMa3-8b-WangchanX-sft-Demo
Built with Meta Llama 3 (fine-tuned with QLoRA)
This model is based on [WangchanX Fine-tuning Pipeline](https://github.com/vistec-AI/WangchanX).
GitHub: [WangchanX Fine-tuning Pipeline](https://github.com/vistec-AI/WangchanX).
License: [Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)
Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Train Example
Train WangchanX pipeline: [Colab](https://colab.research.google.com/github/vistec-AI/WangchanX/blob/main/notebooks/Train_WangchanX_pipeline.ipynb)
## Inference Example
Run on [Colab](https://colab.research.google.com/drive/1PeUnv89Ao2uHRYYzZVOlUwoBUdYKFbLS?usp=sharing)
### Prepare your model and tokenizer:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Model path
path = "airesearch/LLaMa3-8b-WangchanX-sft-Demo"
# Device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto")
```
### Define chat messages:
```python
messages = [
{"role": "user", "content": "ลิเก กับ งิ้ว ต่างกันอย่างไร"},
]
```
### Tokenize chat messages:
```python
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(device)
print(tokenizer.decode(tokenized_chat[0]))
```
<details close>
<summary>Output: </summary>
<br>
<pre lang="markdown">
<|user|>
ลิเก กับ งิ้ว ต่างกันอย่างไร<|end_of_text|>
<|assistant|></pre>
</details>
### Generate responses:
```python
outputs = model.generate(tokenized_chat, max_length=2048)
print(tokenizer.decode(outputs[0]))
```
<details close>
<summary>Output: </summary>
<br>
<pre lang="markdown">
<|user|>
ลิเก กับ งิ้ว ต่างกันอย่างไร<|end_of_text|>
<|assistant|>
ก่อนอื่นเราต้องรู้ความหมายของคำทั้งสอง คำว่า ลิเก เป็นศิลปะการแสดงแบบดั้งเดิมในประเทศไทย ส่วนคำว่า งิ้วน่าจะเป็นการนำภาษาไทยมาแปลจากคำว่า อินโดปีเลีย (indoplea) ซึ่งเป็นชื่อเรียกดนตรีที่มีต้นกำเนิดจากรัฐอุตตาร์ประเทศ ในอินเดีย และได้แพร่หลายไปยังเอเชียตะวันออกเฉียงใต้ โดยเฉพาะสาธารณรัฐประชาชนจีนและเวียดนาม จึงทำให้เกิดคำว่า งิ้วด้วย แต่ทุกคนไม่รู้ว่ามันก็คืออะไรจริง ๆ แล้ว มันมีความแตกต่างกันมาก เพราะถ้าไปถามชาวบ้านบางแห่งอาจจะบอกว่าเป็นอีกประเภทหนึ่งของเพลงโบราณหรือเพลงพื้นเมือง หรือถ้าพูดตามหลักทางประวัติศาสตร์ก็จะกล่าวว่านั่นคือ การขับร้องเพลงที่ใช้รูปแบบการประสานเสียงแบบฮินดู-ซิกห์วัล ที่ผสมผสานระหว่างภาษาอังกฤษ ภาษาจีนกลาง ภาษาพม่า และภาษาทางเหนือกับภาษาลาว รวมถึงภาษากลุ่มออสเตรโลไนว์ในอดีต ดังนั้นตอนนี้คุณสามารถสรุปได้อย่างแม่นยำว่าสองอย่างเหล่านี้แตกต่างกันอย่างไร: ลิเก คือ ศิลปะการแสดงที่มีมายาวนานกว่า 100 ปีในประเทศไทย เช่น ลิเกล้านนา, ลิเกตลุง, ลิเกล้อ ฯลฯ ขณะที่ งิ้ว หมายถึง เพลงประสานเสียงที่มีรากเหง้าของวงการเพลงคลาสสิคในอินเดีย และแพร่กระจายในเอเชียตะวันตกเฉียงใต้เป็นสิ่งแรกๆ หลังจากการเผยแผ่ศาสนายุคแรกๆ นอกจากนี้ ยังมีการรวมแนวเพลงเพื่อรวมเข้ากับการเต้นร่วมสมัยและบทละครที่มีอิทธิพลจากวรรณกรรมจีน<|end_of_text|></pre>
</details>
| {"language": ["th", "en"], "license": "llama3", "datasets": ["airesearch/concat_six_dataset_th_en"]} | airesearch/LLaMa3-8b-WangchanX-sft-Demo | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"th",
"en",
"dataset:airesearch/concat_six_dataset_th_en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T04:36:24+00:00 | [] | [
"th",
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #conversational #th #en #dataset-airesearch/concat_six_dataset_th_en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# LLaMa3-8b-WangchanX-sft-Demo
Built with Meta Llama 3 (fine-tuned with QLoRA)
This model is based on WangchanX Fine-tuning Pipeline.
GitHub: WangchanX Fine-tuning Pipeline.
License: Meta Llama 3 Community License
Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Train Example
Train WangchanX pipeline: Colab
## Inference Example
Run on Colab
### Prepare your model and tokenizer:
### Define chat messages:
### Tokenize chat messages:
<details close>
<summary>Output: </summary>
<br>
<pre lang="markdown">
<|user|>
ลิเก กับ งิ้ว ต่างกันอย่างไร<|end_of_text|>
<|assistant|></pre>
</details>
### Generate responses:
<details close>
<summary>Output: </summary>
<br>
<pre lang="markdown">
<|user|>
ลิเก กับ งิ้ว ต่างกันอย่างไร<|end_of_text|>
<|assistant|>
ก่อนอื่นเราต้องรู้ความหมายของคำทั้งสอง คำว่า ลิเก เป็นศิลปะการแสดงแบบดั้งเดิมในประเทศไทย ส่วนคำว่า งิ้วน่าจะเป็นการนำภาษาไทยมาแปลจากคำว่า อินโดปีเลีย (indoplea) ซึ่งเป็นชื่อเรียกดนตรีที่มีต้นกำเนิดจากรัฐอุตตาร์ประเทศ ในอินเดีย และได้แพร่หลายไปยังเอเชียตะวันออกเฉียงใต้ โดยเฉพาะสาธารณรัฐประชาชนจีนและเวียดนาม จึงทำให้เกิดคำว่า งิ้วด้วย แต่ทุกคนไม่รู้ว่ามันก็คืออะไรจริง ๆ แล้ว มันมีความแตกต่างกันมาก เพราะถ้าไปถามชาวบ้านบางแห่งอาจจะบอกว่าเป็นอีกประเภทหนึ่งของเพลงโบราณหรือเพลงพื้นเมือง หรือถ้าพูดตามหลักทางประวัติศาสตร์ก็จะกล่าวว่านั่นคือ การขับร้องเพลงที่ใช้รูปแบบการประสานเสียงแบบฮินดู-ซิกห์วัล ที่ผสมผสานระหว่างภาษาอังกฤษ ภาษาจีนกลาง ภาษาพม่า และภาษาทางเหนือกับภาษาลาว รวมถึงภาษากลุ่มออสเตรโลไนว์ในอดีต ดังนั้นตอนนี้คุณสามารถสรุปได้อย่างแม่นยำว่าสองอย่างเหล่านี้แตกต่างกันอย่างไร: ลิเก คือ ศิลปะการแสดงที่มีมายาวนานกว่า 100 ปีในประเทศไทย เช่น ลิเกล้านนา, ลิเกตลุง, ลิเกล้อ ฯลฯ ขณะที่ งิ้ว หมายถึง เพลงประสานเสียงที่มีรากเหง้าของวงการเพลงคลาสสิคในอินเดีย และแพร่กระจายในเอเชียตะวันตกเฉียงใต้เป็นสิ่งแรกๆ หลังจากการเผยแผ่ศาสนายุคแรกๆ นอกจากนี้ ยังมีการรวมแนวเพลงเพื่อรวมเข้ากับการเต้นร่วมสมัยและบทละครที่มีอิทธิพลจากวรรณกรรมจีน<|end_of_text|></pre>
</details>
| [
"# LLaMa3-8b-WangchanX-sft-Demo\n\nBuilt with Meta Llama 3 (Fine tuning with Qlora)\n\nThis model is based on WangchanX Fine-tuning Pipeline.\n\nGitHub: WangchanX Fine-tuning Pipeline.\n\nLicense: Meta Llama 3 Community License\n\nMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.",
"## Train Example\n\nTrain WangchanX pipeline: Colab",
"## Inference Example\n\nRun on Colab",
"### Prepare your model and tokenizer:",
"### Define chat messages:",
"### Tokenize chat messages:\n\n\n\n<details close>\n <summary>Output: </summary>\n <br>\n <pre lang=\"markdown\">\n<|user|>\nลิเก กับ งิ้ว ต่างกันอย่างไร<|end_of_text|>\n<|assistant|></pre>\n</details>",
"### Generate responses:\n\n\n\n<details close>\n <summary>Output: </summary>\n <br>\n <pre lang=\"markdown\">\n<|user|>\nลิเก กับ งิ้ว ต่างกันอย่างไร<|end_of_text|>\n<|assistant|>\nก่อนอื่นเราต้องรู้ความหมายของคำทั้งสอง คำว่า ลิเก เป็นศิลปะการแสดงแบบดั้งเดิมในประเทศไทย ส่วนคำว่า งิ้วน่าจะเป็นการนำภาษาไทยมาแปลจากคำว่า อินโดปีเลีย (indoplea) ซึ่งเป็นชื่อเรียกดนตรีที่มีต้นกำเนิดจากรัฐอุตตาร์ประเทศ ในอินเดีย และได้แพร่หลายไปยังเอเชียตะวันออกเฉียงใต้ โดยเฉพาะสาธารณรัฐประชาชนจีนและเวียดนาม จึงทำให้เกิดคำว่า งิ้วด้วย แต่ทุกคนไม่รู้ว่ามันก็คืออะไรจริง ๆ แล้ว มันมีความแตกต่างกันมาก เพราะถ้าไปถามชาวบ้านบางแห่งอาจจะบอกว่าเป็นอีกประเภทหนึ่งของเพลงโบราณหรือเพลงพื้นเมือง หรือถ้าพูดตามหลักทางประวัติศาสตร์ก็จะกล่าวว่านั่นคือ การขับร้องเพลงที่ใช้รูปแบบการประสานเสียงแบบฮินดู-ซิกห์วัล ที่ผสมผสานระหว่างภาษาอังกฤษ ภาษาจีนกลาง ภาษาพม่า และภาษาทางเหนือกับภาษาลาว รวมถึงภาษากลุ่มออสเตรโลไนว์ในอดีต ดังนั้นตอนนี้คุณสามารถสรุปได้อย่างแม่นยำว่าสองอย่างเหล่านี้แตกต่างกันอย่างไร: ลิเก คือ ศิลปะการแสดงที่มีมายาวนานกว่า 100 ปีในประเทศไทย เช่น ลิเกล้านนา, ลิเกตลุง, ลิเกล้อ ฯลฯ ขณะที่ งิ้ว หมายถึง เพลงประสานเสียงที่มีรากเหง้าของวงการเพลงคลาสสิคในอินเดีย และแพร่กระจายในเอเชียตะวันตกเฉียงใต้เป็นสิ่งแรกๆ หลังจากการเผยแผ่ศาสนายุคแรกๆ นอกจากนี้ ยังมีการรวมแนวเพลงเพื่อรวมเข้ากับการเต้นร่วมสมัยและบทละครที่มีอิทธิพลจากวรรณกรรมจีน<|end_of_text|></pre>\n</details>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #th #en #dataset-airesearch/concat_six_dataset_th_en #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# LLaMa3-8b-WangchanX-sft-Demo\n\nBuilt with Meta Llama 3 (Fine tuning with Qlora)\n\nThis model is based on WangchanX Fine-tuning Pipeline.\n\nGitHub: WangchanX Fine-tuning Pipeline.\n\nLicense: Meta Llama 3 Community License\n\nMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.",
"## Train Example\n\nTrain WangchanX pipeline: Colab",
"## Inference Example\n\nRun on Colab",
"### Prepare your model and tokenizer:",
"### Define chat messages:",
"### Tokenize chat messages:\n\n\n\n<details close>\n <summary>Output: </summary>\n <br>\n <pre lang=\"markdown\">\n<|user|>\nลิเก กับ งิ้ว ต่างกันอย่างไร<|end_of_text|>\n<|assistant|></pre>\n</details>",
"### Generate responses:\n\n\n\n<details close>\n <summary>Output: </summary>\n <br>\n <pre lang=\"markdown\">\n<|user|>\nลิเก กับ งิ้ว ต่างกันอย่างไร<|end_of_text|>\n<|assistant|>\nก่อนอื่นเราต้องรู้ความหมายของคำทั้งสอง คำว่า ลิเก เป็นศิลปะการแสดงแบบดั้งเดิมในประเทศไทย ส่วนคำว่า งิ้วน่าจะเป็นการนำภาษาไทยมาแปลจากคำว่า อินโดปีเลีย (indoplea) ซึ่งเป็นชื่อเรียกดนตรีที่มีต้นกำเนิดจากรัฐอุตตาร์ประเทศ ในอินเดีย และได้แพร่หลายไปยังเอเชียตะวันออกเฉียงใต้ โดยเฉพาะสาธารณรัฐประชาชนจีนและเวียดนาม จึงทำให้เกิดคำว่า งิ้วด้วย แต่ทุกคนไม่รู้ว่ามันก็คืออะไรจริง ๆ แล้ว มันมีความแตกต่างกันมาก เพราะถ้าไปถามชาวบ้านบางแห่งอาจจะบอกว่าเป็นอีกประเภทหนึ่งของเพลงโบราณหรือเพลงพื้นเมือง หรือถ้าพูดตามหลักทางประวัติศาสตร์ก็จะกล่าวว่านั่นคือ การขับร้องเพลงที่ใช้รูปแบบการประสานเสียงแบบฮินดู-ซิกห์วัล ที่ผสมผสานระหว่างภาษาอังกฤษ ภาษาจีนกลาง ภาษาพม่า และภาษาทางเหนือกับภาษาลาว รวมถึงภาษากลุ่มออสเตรโลไนว์ในอดีต ดังนั้นตอนนี้คุณสามารถสรุปได้อย่างแม่นยำว่าสองอย่างเหล่านี้แตกต่างกันอย่างไร: ลิเก คือ ศิลปะการแสดงที่มีมายาวนานกว่า 100 ปีในประเทศไทย เช่น ลิเกล้านนา, ลิเกตลุง, ลิเกล้อ ฯลฯ ขณะที่ งิ้ว หมายถึง เพลงประสานเสียงที่มีรากเหง้าของวงการเพลงคลาสสิคในอินเดีย และแพร่กระจายในเอเชียตะวันตกเฉียงใต้เป็นสิ่งแรกๆ หลังจากการเผยแผ่ศาสนายุคแรกๆ นอกจากนี้ ยังมีการรวมแนวเพลงเพื่อรวมเข้ากับการเต้นร่วมสมัยและบทละครที่มีอิทธิพลจากวรรณกรรมจีน<|end_of_text|></pre>\n</details>"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | zandfj/LLaMA2-7B-Chatdpo-zf-z-f-042711-moren | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T04:38:32+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# miqu-evil-dpo
# **Model Details**
## Description
miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.
It is trained with the evil-tune method applied.

<!-- prompt-template start -->
## Prompt template: Mistral Inst
```
<s> [INST] {inst} [/INST]
```
<!-- prompt-template end -->
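A minimal helper for producing this format is sketched below; note that many tokenizers prepend the `<s>` BOS token themselves, in which case it should be dropped from the string:

```python
def build_prompt(inst: str) -> str:
    # Wrap a user instruction in the Mistral Inst template shown above.
    return f"<s> [INST] {inst} [/INST]"

print(build_prompt("Summarize the disclaimer below in one sentence."))
```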
## Disclaimer
The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
| {"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"} | blockblockblock/miqu-evil-dpo-bpw4.8-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T04:38:43+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# miqu-evil-dpo
# Model Details
## Description
miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.
It is trained with the evil-tune method applied.
!image/png
## Prompt template: Mistral Inst
## Disclaimer
The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
| [
"# miqu-evil-dpo",
"# Model Details",
"## Description\nmiqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with evil-tune method applied.\n\n!image/png",
"## Prompt template: Mistral Inst",
"## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# miqu-evil-dpo",
"# Model Details",
"## Description\nmiqu-evil-dpo is fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with evil-tune method applied.\n\n!image/png",
"## Prompt template: Mistral Inst",
"## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use."
] |
null | null |
# Kaoeiri/Keiana-L3-Test5.8-8B-14-Q6_K-GGUF
This model was converted to GGUF format from [`Kaoeiri/Keiana-L3-Test5.8-8B-14`](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.8-8B-14) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.8-8B-14) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Kaoeiri/Keiana-L3-Test5.8-8B-14-Q6_K-GGUF --model keiana-l3-test5.8-8b-14.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Kaoeiri/Keiana-L3-Test5.8-8B-14-Q6_K-GGUF --model keiana-l3-test5.8-8b-14.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m keiana-l3-test5.8-8b-14.Q6_K.gguf -n 128
```
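If you would rather drive the quantized file from Python, here is a hedged sketch using `huggingface_hub` and the `llama-cpp-python` bindings (the package choice is an assumption; the GGUF filename matches the one used above):
```python
# Download the Q6_K GGUF from the Hub and run a short completion locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Kaoeiri/Keiana-L3-Test5.8-8B-14-Q6_K-GGUF",
    filename="keiana-l3-test5.8-8b-14.Q6_K.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```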
| {"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.4-8B-10", "Undi95/Llama-3-LewdPlay-8B", "Kaoeiri/Keiana-L3-Test4.7-8B-3", "llama-cpp", "gguf-my-repo"], "base_model": ["Kaoeiri/Keiana-L3-Test5.4-8B-10", "Undi95/Llama-3-LewdPlay-8B", "Kaoeiri/Keiana-L3-Test4.7-8B-3"]} | Kaoeiri/Keiana-L3-Test5.8-8B-14-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Kaoeiri/Keiana-L3-Test5.4-8B-10",
"Undi95/Llama-3-LewdPlay-8B",
"Kaoeiri/Keiana-L3-Test4.7-8B-3",
"llama-cpp",
"gguf-my-repo",
"base_model:Kaoeiri/Keiana-L3-Test5.4-8B-10",
"base_model:Undi95/Llama-3-LewdPlay-8B",
"base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3",
"region:us"
] | null | 2024-04-27T04:39:03+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test5.4-8B-10 #Undi95/Llama-3-LewdPlay-8B #Kaoeiri/Keiana-L3-Test4.7-8B-3 #llama-cpp #gguf-my-repo #base_model-Kaoeiri/Keiana-L3-Test5.4-8B-10 #base_model-Undi95/Llama-3-LewdPlay-8B #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #region-us
|
# Kaoeiri/Keiana-L3-Test5.8-8B-14-Q6_K-GGUF
This model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test5.8-8B-14' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# Kaoeiri/Keiana-L3-Test5.8-8B-14-Q6_K-GGUF\nThis model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test5.8-8B-14' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test5.4-8B-10 #Undi95/Llama-3-LewdPlay-8B #Kaoeiri/Keiana-L3-Test4.7-8B-3 #llama-cpp #gguf-my-repo #base_model-Kaoeiri/Keiana-L3-Test5.4-8B-10 #base_model-Undi95/Llama-3-LewdPlay-8B #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #region-us \n",
"# Kaoeiri/Keiana-L3-Test5.8-8B-14-Q6_K-GGUF\nThis model was converted to GGUF format from 'Kaoeiri/Keiana-L3-Test5.8-8B-14' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | mlx |
# mlx-community/UTENA-7B-NSFW-V2-4bit
This model was converted to MLX format from [`AI-B/UTENA-7B-NSFW-V2`](https://huggingface.co/AI-B/UTENA-7B-NSFW-V2).
Refer to the [original model card](https://huggingface.co/AI-B/UTENA-7B-NSFW-V2) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/UTENA-7B-NSFW-V2-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
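The `generate` helper also accepts sampling controls; a small hedged follow-up (the `max_tokens` argument name is assumed from recent `mlx-lm` releases):
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/UTENA-7B-NSFW-V2-4bit")
# Bound the response length and stream tokens as they decode.
response = generate(model, tokenizer, prompt="hello", max_tokens=128, verbose=True)
```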
| {"license": "unlicense", "tags": ["mergekit", "merge", "mlx"], "base_model": ["AI-B/UTENA-7B-NSFW", "AI-B/UTENA-7B-BAGEL"], "model-index": [{"name": "UTENA-7B-NSFW-V2", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 63.31, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-NSFW-V2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 84.54, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-NSFW-V2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 63.97, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-NSFW-V2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 47.81}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-NSFW-V2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 78.69, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-NSFW-V2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 42.38, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AI-B/UTENA-7B-NSFW-V2", "name": "Open LLM Leaderboard"}}]}]} | mlx-community/UTENA-7B-NSFW-V2-4bit | null | [
"mlx",
"safetensors",
"mistral",
"mergekit",
"merge",
"base_model:AI-B/UTENA-7B-NSFW",
"base_model:AI-B/UTENA-7B-BAGEL",
"license:unlicense",
"model-index",
"region:us"
] | null | 2024-04-27T04:40:11+00:00 | [] | [] | TAGS
#mlx #safetensors #mistral #mergekit #merge #base_model-AI-B/UTENA-7B-NSFW #base_model-AI-B/UTENA-7B-BAGEL #license-unlicense #model-index #region-us
|
# mlx-community/UTENA-7B-NSFW-V2-4bit
This model was converted to MLX format from ['AI-B/UTENA-7B-NSFW-V2']().
Refer to the original model card for more details on the model.
## Use with mlx
| [
"# mlx-community/UTENA-7B-NSFW-V2-4bit\nThis model was converted to MLX format from ['AI-B/UTENA-7B-NSFW-V2']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] | [
"TAGS\n#mlx #safetensors #mistral #mergekit #merge #base_model-AI-B/UTENA-7B-NSFW #base_model-AI-B/UTENA-7B-BAGEL #license-unlicense #model-index #region-us \n",
"# mlx-community/UTENA-7B-NSFW-V2-4bit\nThis model was converted to MLX format from ['AI-B/UTENA-7B-NSFW-V2']().\nRefer to the original model card for more details on the model.",
"## Use with mlx"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
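In the meantime, a hedged sketch for loading this checkpoint with `transformers` (the repo id is taken from this card's metadata; generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "terry69/zephyr-7b-sft-qlora-5p-full"  # assumed from the card metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```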
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | terry69/zephyr-7b-sft-qlora-5p-full | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T04:45:15+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-audio | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Speecht5 finetuned nl - FredDYyy
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5332 | 5.66 | 500 | 0.4933 |
| 0.5219 | 11.32 | 1000 | 0.4798 |
| 0.5078 | 16.97 | 1500 | 0.4745 |
| 0.4991 | 22.63 | 2000 | 0.4734 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
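For a quick smoke test of the fine-tuned checkpoint, a hedged inference sketch (the repo id is assumed from this card's metadata, and the zero speaker embedding is only a placeholder — real 512-dim x-vectors, e.g. from `speechbrain/spkrec-xvect-voxceleb`, give far better output):
```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "FredDYyy/speecht5_finetuned_voxpopuli_nl"  # assumed from the card metadata
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use real x-vectors in practice
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```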
| {"language": ["nl"], "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["facebook/voxpopuli"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "Speecht5 finetuned nl - FredDYyy", "results": []}]} | FredDYyy/speecht5_finetuned_voxpopuli_nl | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"nl",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T04:49:28+00:00 | [] | [
"nl"
] | TAGS
#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #nl #dataset-facebook/voxpopuli #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
| Speecht5 finetuned nl - FredDYyy
================================
This model is a fine-tuned version of microsoft/speecht5\_tts on the Voxpopuli dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4734
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 4
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 250
* training\_steps: 2000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 250\n* training\\_steps: 2000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #nl #dataset-facebook/voxpopuli #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 250\n* training\\_steps: 2000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | null |
# delijoe/Llama-3-Soliloquy-8B-Q8_0-GGUF
This model was converted to GGUF format from [`openlynn/Llama-3-Soliloquy-8B`](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo delijoe/Llama-3-Soliloquy-8B-Q8_0-GGUF --model llama-3-soliloquy-8b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo delijoe/Llama-3-Soliloquy-8B-Q8_0-GGUF --model llama-3-soliloquy-8b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-soliloquy-8b.Q8_0.gguf -n 128
```
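Once `llama-server` is running (default port 8080), its completion endpoint can also be queried from Python; a hedged sketch (endpoint and field names per the llama.cpp server docs, port assumed default):
```python
import requests

resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": "The meaning to life and the universe is", "n_predict": 64},
)
print(resp.json()["content"])
```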
| {"language": ["en"], "license": "cc-by-nc-sa-4.0", "tags": ["llama-cpp", "gguf-my-repo"]} | delijoe/Llama-3-Soliloquy-8B-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-04-27T04:49:49+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #license-cc-by-nc-sa-4.0 #region-us
|
# delijoe/Llama-3-Soliloquy-8B-Q8_0-GGUF
This model was converted to GGUF format from 'openlynn/Llama-3-Soliloquy-8B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# delijoe/Llama-3-Soliloquy-8B-Q8_0-GGUF\nThis model was converted to GGUF format from 'openlynn/Llama-3-Soliloquy-8B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-cc-by-nc-sa-4.0 #region-us \n",
"# delijoe/Llama-3-Soliloquy-8B-Q8_0-GGUF\nThis model was converted to GGUF format from 'openlynn/Llama-3-Soliloquy-8B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
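Since this repo is tagged 4-bit, a hedged sketch of loading it with bitsandbytes quantization (the config values are illustrative defaults, not the card's confirmed settings):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "fxmeng/PiSSA-Llama-2-7B-r64-4bit-5iter"  # assumed from the card metadata
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
```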
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | fxmeng/PiSSA-Llama-2-7B-r64-4bit-5iter | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-27T04:49:56+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-callhome-jpn
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the diarizers-community/callhome dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7479
- Der: 0.2241
- False Alarm: 0.0478
- Missed Detection: 0.1332
- Confusion: 0.0431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.5757 | 1.0 | 328 | 0.7460 | 0.2299 | 0.0502 | 0.1343 | 0.0454 |
| 0.5219 | 2.0 | 656 | 0.7482 | 0.2251 | 0.0486 | 0.1340 | 0.0425 |
| 0.5067 | 3.0 | 984 | 0.7539 | 0.2259 | 0.0454 | 0.1369 | 0.0435 |
| 0.4923 | 4.0 | 1312 | 0.7453 | 0.2246 | 0.0490 | 0.1320 | 0.0436 |
| 0.5157 | 5.0 | 1640 | 0.7479 | 0.2241 | 0.0478 | 0.1332 | 0.0431 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
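For inference, a hedged sketch based on the `diarizers` library README (the API may differ by version, and the pyannote pipeline requires an HF auth token); it swaps this fine-tuned segmentation model into a pyannote 3.1 diarization pipeline:
```python
from diarizers import SegmentationModel
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")  # token required
model = SegmentationModel().from_pretrained(
    "heavenode/speaker-segmentation-fine-tuned-callhome-jpn"
)
# Convert back to a pyannote model and swap it into the pipeline's segmentation stage.
pipeline._segmentation.model = model.to_pyannote_model()
```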
| {"language": ["jpn"], "license": "apache-2.0", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/callhome"], "base_model": "openai/whisper-small", "model-index": [{"name": "speaker-segmentation-fine-tuned-callhome-jpn", "results": []}]} | heavenode/speaker-segmentation-fine-tuned-callhome-jpn | null | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"jpn",
"dataset:diarizers-community/callhome",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T04:52:52+00:00 | [] | [
"jpn"
] | TAGS
#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #jpn #dataset-diarizers-community/callhome #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us
| speaker-segmentation-fine-tuned-callhome-jpn
============================================
This model is a fine-tuned version of openai/whisper-small on the diarizers-community/callhome dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7479
* Der: 0.2241
* False Alarm: 0.0478
* Missed Detection: 0.1332
* Confusion: 0.0431
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #pyannet #speaker-diarization #speaker-segmentation #generated_from_trainer #jpn #dataset-diarizers-community/callhome #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs256_nodpo_only4w_iter_4
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3](https://huggingface.co/ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
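For orientation, a hedged sketch of how a comparable DPO iteration could be wired up with `trl` — the exact `DPOTrainer` signature varies across `trl` versions, and the dataset id below is a placeholder taken from this card's metadata:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

train_dataset = load_dataset("updated", split="train")  # placeholder preference dataset

args = TrainingArguments(
    output_dir="iter_4",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,  # with 8 devices: 8 * 8 * 4 = 256 effective batch
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```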
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_iter_4", "results": []}]} | ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_4 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-27T04:55:30+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_4iters_bs256_nodpo_only4w_iter_4
This model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| [
"# 0.001_4iters_bs256_nodpo_only4w_iter_4\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_4iters_bs256_nodpo_only4w_iter_4\n\nThis model is a fine-tuned version of ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_iter_3 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] |
null | null | # SkinXmed Experiences Where to Buy - SkinXmed Reviews Germany Price
Skinxmed Creme Erfahrungen is a moisturizing cream offered by the Skinxmed brand. It was developed specifically to combat skin aging, wrinkles, and other skin problems. The cream contains ingredients such as hyaluronic acid, collagen, and vitamin C, which help to hydrate the skin, firm it, and reduce the appearance of wrinkles.
## **[Click here to buy now on the official SkinXmed website](https://deutschlandbuzz.de/skinxmed-de)**
## Ubiquinone :
Ubiquinone is better known as coenzyme Q10.
Q10 is a secret weapon against wrinkles because, like vitamin C, it acts as an antioxidant and can fight free radicals.
Q10 serves as cell protection and shields the collagen fibers from breakdown caused by UV radiation and oxidative stress.
## Retinol (Vitamin A) :
Retinol is converted into vitamin A acid in the skin.
Dermatologists describe retinol as the most efficient and scientifically proven active ingredient against wrinkles, since it stimulates collagen production and can even repair sun-damaged skin.
## DMAE (Dimethylaminoethanol) :
DMAE is a natural nutrient obtained from fish (including salmon and sardines) and is still considered an insider tip in the fight against wrinkles.
Dimethylaminoethanol improves the firmness and elasticity of the skin and, by protecting the cell membrane, extends the lifespan of the cells.
DMAE is also responsible for the release of more acetylcholine, which gives the micro muscle fibers (MYOFILAMENTS) more tension. DMAE can therefore also counteract sagging areas of skin.
## Alteromonas Ferment Extract :
A peptide made from the amino acids lysine, histidine, and glycine. It promotes water storage capacity and wound healing, stimulates collagen and elastin formation, and increases the skin's moisture retention.
## Pullulan :
Pullulan is a polysaccharide obtained from plant extracts through a natural fermentation process.
## **[Click here to buy now on the official SkinXmed website](https://deutschlandbuzz.de/skinxmed-de)** | {} | VKapseln475/SkinXmed120 | null | [
"region:us"
] | null | 2024-04-27T04:55:53+00:00 | [] | [] | TAGS
#region-us
| # SkinXmed Experiences Where to Buy - SkinXmed Reviews Germany Price
Skinxmed Creme Erfahrungen is a moisturizing cream offered by the Skinxmed brand. It was developed specifically to combat skin aging, wrinkles, and other skin problems. The cream contains ingredients such as hyaluronic acid, collagen, and vitamin C, which help to hydrate the skin, firm it, and reduce the appearance of wrinkles.
## Click here to buy now on the official SkinXmed website
## Ubiquinone :
Ubiquinone is better known as coenzyme Q10.
Q10 is a secret weapon against wrinkles because, like vitamin C, it acts as an antioxidant and can fight free radicals.
Q10 serves as cell protection and shields the collagen fibers from breakdown caused by UV radiation and oxidative stress.
## Retinol (Vitamin A) :
Retinol is converted into vitamin A acid in the skin.
Dermatologists describe retinol as the most efficient and scientifically proven active ingredient against wrinkles, since it stimulates collagen production and can even repair sun-damaged skin.
## DMAE (Dimethylaminoethanol) :
DMAE is a natural nutrient obtained from fish (including salmon and sardines) and is still considered an insider tip in the fight against wrinkles.
Dimethylaminoethanol improves the firmness and elasticity of the skin and, by protecting the cell membrane, extends the lifespan of the cells.
DMAE is also responsible for the release of more acetylcholine, which gives the micro muscle fibers (MYOFILAMENTS) more tension. DMAE can therefore also counteract sagging areas of skin.
## Alteromonas Ferment Extract :
A peptide made from the amino acids lysine, histidine, and glycine. It promotes water storage capacity and wound healing, stimulates collagen and elastin formation, and increases the skin's moisture retention.
## Pullulan :
Pullulan is a polysaccharide obtained from plant extracts through a natural fermentation process.
## Click here to buy now on the official SkinXmed website | [
"# SkinXmed Erfahrungen Wo Kaufen - SkinXmed Bewertungen Deutschland Preis\n\nSkinxmed Creme Erfahrungen ist eine Feuchtigkeitscreme, die von der Marke Skinxmed angeboten wird. Sie ist speziell für die Bekämpfung von Hautalterung, Falten und anderen Hautproblemen entwickelt worden. Die Creme enthält Inhaltsstoffe wie Hyaluronsäure, Kollagen und Vitamin C, die dazu beitragen, die Haut zu hydratisieren, zu straffen und das Auftreten von Falten zu reduzieren.",
"## Klicken Sie hier, um jetzt auf der offiziellen Website von SkinXmed zu kaufen",
"## Ubiquinone :\n\nUbiquinone ist besser bekannt als das Coenzym Q10.\n\nQ10 ist eine Geheimwaffe gegen Falten, da es, wie Vitamin C, als Antioxidans wirkt und freie Radikale bekämpfen kann.\n\nQ10 dient als Zellschutz und schützt die kollagenen Fasern vor dem Zerfall durch UV-Strahlung und oxidativem Stress.",
"## Retinol (Vitamin A) :\n\nRetinol wird in der Haut zu Vitamin-A-Säure umgewandelt.\n\nRetinol wird von Dermatologen als effizientester und wissenschaftlich erwiesener Wirkstoff gegen Falten bezeichnet, da es die Kollagenproduktion anregt und sogar sonnengeschädigte Haut reparieren kann.",
"## DMAE (Dimethylaminoethanol) :\n\nDMAE ist ein natürlicher Nährstoff, der aus Fisch (u.a. Lachs, Sardinen) gewonnen wird und noch als Geheimtipp im Kampf gegen Falten gilt.\n\nDimethylaminoethanol verbessert die Festigkeit und Elastizität der Haut und sorgt durch einen Schutz der Zellmembran für eine längere Lebensdauer der Zellen.\n\nDMAE ist auch dafür verantwortlich, dass mehr Acetylcholin ausgeschüttet wird, wodurch die Mikro-Muskelfasern (MYOFILAMENTE) mehr Spannung erhalten. Somit kann DMAE auch schlaffen Hautpartien entgegenwirken.",
"## Alteromonas Ferment Extract :\n\nPeptid aus den Aminosäuren Lysin, Histidin und Glysin. Fördert die Wasserspeicherkapazität und Wundheilung. Regt die Kollagen- und Elastinbildung und erhöht das Feuchthaltevermögen der Haut.",
"## Pullulan :\n\nBei Pullulan handet es sich um ein Polysaccharid, welches durch einen natürlichen Fermentationsprozess aus Pflanzenextrakten gewonnen wird.",
"## Klicken Sie hier, um jetzt auf der offiziellen Website von SkinXmed zu kaufen"
] | [
"TAGS\n#region-us \n",
"# SkinXmed Erfahrungen Wo Kaufen - SkinXmed Bewertungen Deutschland Preis\n\nSkinxmed Creme Erfahrungen ist eine Feuchtigkeitscreme, die von der Marke Skinxmed angeboten wird. Sie ist speziell für die Bekämpfung von Hautalterung, Falten und anderen Hautproblemen entwickelt worden. Die Creme enthält Inhaltsstoffe wie Hyaluronsäure, Kollagen und Vitamin C, die dazu beitragen, die Haut zu hydratisieren, zu straffen und das Auftreten von Falten zu reduzieren.",
"## Klicken Sie hier, um jetzt auf der offiziellen Website von SkinXmed zu kaufen",
"## Ubiquinone :\n\nUbiquinone ist besser bekannt als das Coenzym Q10.\n\nQ10 ist eine Geheimwaffe gegen Falten, da es, wie Vitamin C, als Antioxidans wirkt und freie Radikale bekämpfen kann.\n\nQ10 dient als Zellschutz und schützt die kollagenen Fasern vor dem Zerfall durch UV-Strahlung und oxidativem Stress.",
"## Retinol (Vitamin A) :\n\nRetinol wird in der Haut zu Vitamin-A-Säure umgewandelt.\n\nRetinol wird von Dermatologen als effizientester und wissenschaftlich erwiesener Wirkstoff gegen Falten bezeichnet, da es die Kollagenproduktion anregt und sogar sonnengeschädigte Haut reparieren kann.",
"## DMAE (Dimethylaminoethanol) :\n\nDMAE ist ein natürlicher Nährstoff, der aus Fisch (u.a. Lachs, Sardinen) gewonnen wird und noch als Geheimtipp im Kampf gegen Falten gilt.\n\nDimethylaminoethanol verbessert die Festigkeit und Elastizität der Haut und sorgt durch einen Schutz der Zellmembran für eine längere Lebensdauer der Zellen.\n\nDMAE ist auch dafür verantwortlich, dass mehr Acetylcholin ausgeschüttet wird, wodurch die Mikro-Muskelfasern (MYOFILAMENTE) mehr Spannung erhalten. Somit kann DMAE auch schlaffen Hautpartien entgegenwirken.",
"## Alteromonas Ferment Extract :\n\nPeptid aus den Aminosäuren Lysin, Histidin und Glysin. Fördert die Wasserspeicherkapazität und Wundheilung. Regt die Kollagen- und Elastinbildung und erhöht das Feuchthaltevermögen der Haut.",
"## Pullulan :\n\nBei Pullulan handet es sich um ein Polysaccharid, welches durch einen natürlichen Fermentationsprozess aus Pflanzenextrakten gewonnen wird.",
"## Klicken Sie hier, um jetzt auf der offiziellen Website von SkinXmed zu kaufen"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_8192_512_30M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5206
- F1 Score: 0.7619
- Accuracy: 0.7636
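For evaluation or inference, a hedged sketch of attaching the adapter to its base model (the sequence-classification head and `trust_remote_code` are assumptions about the base checkpoint):
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_8192_512_30M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # loads the fine-tuned adapter weights
```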
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6078 | 1.01 | 200 | 0.5768 | 0.7265 | 0.7285 |
| 0.5608 | 2.02 | 400 | 0.5470 | 0.7459 | 0.7481 |
| 0.5405 | 3.03 | 600 | 0.5371 | 0.7529 | 0.7547 |
| 0.532 | 4.04 | 800 | 0.5444 | 0.7593 | 0.7607 |
| 0.5284 | 5.05 | 1000 | 0.5269 | 0.7630 | 0.7642 |
| 0.5226 | 6.06 | 1200 | 0.5249 | 0.7574 | 0.7601 |
| 0.5164 | 7.07 | 1400 | 0.5299 | 0.7616 | 0.7636 |
| 0.5132 | 8.08 | 1600 | 0.5247 | 0.7642 | 0.7664 |
| 0.5117 | 9.09 | 1800 | 0.5142 | 0.7676 | 0.7693 |
| 0.5078 | 10.1 | 2000 | 0.5164 | 0.7676 | 0.7689 |
| 0.5017 | 11.11 | 2200 | 0.5228 | 0.7648 | 0.7670 |
| 0.5005 | 12.12 | 2400 | 0.5138 | 0.7654 | 0.7670 |
| 0.5 | 13.13 | 2600 | 0.5126 | 0.7676 | 0.7696 |
| 0.497 | 14.14 | 2800 | 0.5162 | 0.7691 | 0.7708 |
| 0.4929 | 15.15 | 3000 | 0.5111 | 0.7688 | 0.7705 |
| 0.4924 | 16.16 | 3200 | 0.5206 | 0.7602 | 0.7636 |
| 0.4876 | 17.17 | 3400 | 0.5250 | 0.7669 | 0.7693 |
| 0.489 | 18.18 | 3600 | 0.5060 | 0.7712 | 0.7727 |
| 0.4838 | 19.19 | 3800 | 0.5088 | 0.7676 | 0.7696 |
| 0.4824 | 20.2 | 4000 | 0.5127 | 0.7680 | 0.7699 |
| 0.4808 | 21.21 | 4200 | 0.5221 | 0.7622 | 0.7655 |
| 0.4771 | 22.22 | 4400 | 0.5187 | 0.7665 | 0.7683 |
| 0.4737 | 23.23 | 4600 | 0.5239 | 0.7615 | 0.7645 |
| 0.4763 | 24.24 | 4800 | 0.5208 | 0.7583 | 0.7614 |
| 0.469 | 25.25 | 5000 | 0.5212 | 0.7689 | 0.7702 |
| 0.4714 | 26.26 | 5200 | 0.5193 | 0.7676 | 0.7683 |
| 0.4676 | 27.27 | 5400 | 0.5224 | 0.7577 | 0.7610 |
| 0.4703 | 28.28 | 5600 | 0.5141 | 0.7693 | 0.7708 |
| 0.4703 | 29.29 | 5800 | 0.5364 | 0.7493 | 0.7544 |
| 0.4618 | 30.3 | 6000 | 0.5225 | 0.7652 | 0.7674 |
| 0.4613 | 31.31 | 6200 | 0.5180 | 0.7674 | 0.7693 |
| 0.4607 | 32.32 | 6400 | 0.5302 | 0.7588 | 0.7620 |
| 0.4597 | 33.33 | 6600 | 0.5237 | 0.7637 | 0.7664 |
| 0.4551 | 34.34 | 6800 | 0.5226 | 0.7618 | 0.7645 |
| 0.4534 | 35.35 | 7000 | 0.5275 | 0.7698 | 0.7715 |
| 0.4586 | 36.36 | 7200 | 0.5189 | 0.7650 | 0.7670 |
| 0.452 | 37.37 | 7400 | 0.5323 | 0.7620 | 0.7642 |
| 0.4535 | 38.38 | 7600 | 0.5212 | 0.7714 | 0.7727 |
| 0.4507 | 39.39 | 7800 | 0.5250 | 0.7647 | 0.7664 |
| 0.4507 | 40.4 | 8000 | 0.5249 | 0.7656 | 0.7674 |
| 0.4477 | 41.41 | 8200 | 0.5329 | 0.7590 | 0.7623 |
| 0.4527 | 42.42 | 8400 | 0.5300 | 0.7608 | 0.7636 |
| 0.4479 | 43.43 | 8600 | 0.5286 | 0.7639 | 0.7661 |
| 0.4459 | 44.44 | 8800 | 0.5290 | 0.7644 | 0.7667 |
| 0.4477 | 45.45 | 9000 | 0.5246 | 0.7645 | 0.7667 |
| 0.4477 | 46.46 | 9200 | 0.5292 | 0.7647 | 0.7667 |
| 0.4483 | 47.47 | 9400 | 0.5295 | 0.7623 | 0.7648 |
| 0.4402 | 48.48 | 9600 | 0.5289 | 0.7635 | 0.7658 |
| 0.4483 | 49.49 | 9800 | 0.5294 | 0.7626 | 0.7652 |
| 0.4455 | 50.51 | 10000 | 0.5286 | 0.7635 | 0.7658 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_8192_512_30M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_8192_512_30M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T04:56:06+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_8192\_512\_30M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5206
* F1 Score: 0.7619
* Accuracy: 0.7636
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_8192_512_30M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5307
- F1 Score: 0.7708
- Accuracy: 0.7727
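
No usage snippet is included on the card. A minimal loading sketch follows; it assumes the base checkpoint works with `AutoModelForSequenceClassification` via remote code and that the task is binary classification.

```python
# Sketch: attach this PEFT adapter to its base model.
# num_labels=2 and trust_remote_code are assumptions, not stated on the card.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_8192_512_30M-L32_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id).eval()
```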
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
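
Written out in plain PyTorch, the optimizer and schedule listed above correspond roughly to the sketch below; zero warmup steps are assumed, since the card does not mention warmup, and the `Linear` module is only a stand-in for the actual PEFT-wrapped network.

```python
# The listed Adam settings and linear LR schedule, expressed directly.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 2)  # placeholder for the real model
optimizer = torch.optim.Adam(model.parameters(),
                             lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=10_000)

for step in range(10_000):
    ...  # forward/backward pass on a 128-sample batch would go here
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```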
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5936 | 1.01 | 200 | 0.5500 | 0.7418 | 0.7434 |
| 0.5439 | 2.02 | 400 | 0.5319 | 0.7574 | 0.7585 |
| 0.5282 | 3.03 | 600 | 0.5281 | 0.7587 | 0.7601 |
| 0.5201 | 4.04 | 800 | 0.5286 | 0.7619 | 0.7633 |
| 0.5164 | 5.05 | 1000 | 0.5161 | 0.7636 | 0.7645 |
| 0.5093 | 6.06 | 1200 | 0.5202 | 0.7612 | 0.7652 |
| 0.5001 | 7.07 | 1400 | 0.5248 | 0.7644 | 0.7661 |
| 0.495 | 8.08 | 1600 | 0.5240 | 0.7570 | 0.7598 |
| 0.4923 | 9.09 | 1800 | 0.5142 | 0.7655 | 0.7677 |
| 0.486 | 10.1 | 2000 | 0.5178 | 0.7654 | 0.7674 |
| 0.4763 | 11.11 | 2200 | 0.5245 | 0.7587 | 0.7623 |
| 0.4741 | 12.12 | 2400 | 0.5297 | 0.7624 | 0.7636 |
| 0.4687 | 13.13 | 2600 | 0.5358 | 0.7547 | 0.7576 |
| 0.4628 | 14.14 | 2800 | 0.5307 | 0.7586 | 0.7604 |
| 0.4554 | 15.15 | 3000 | 0.5252 | 0.7646 | 0.7661 |
| 0.4526 | 16.16 | 3200 | 0.5357 | 0.7520 | 0.7557 |
| 0.4434 | 17.17 | 3400 | 0.5448 | 0.7686 | 0.7699 |
| 0.4433 | 18.18 | 3600 | 0.5297 | 0.7589 | 0.7614 |
| 0.4337 | 19.19 | 3800 | 0.5311 | 0.7627 | 0.7642 |
| 0.4304 | 20.2 | 4000 | 0.5409 | 0.7545 | 0.7560 |
| 0.4271 | 21.21 | 4200 | 0.5562 | 0.7592 | 0.7617 |
| 0.4174 | 22.22 | 4400 | 0.5685 | 0.7485 | 0.7494 |
| 0.4116 | 23.23 | 4600 | 0.5677 | 0.7588 | 0.7601 |
| 0.4096 | 24.24 | 4800 | 0.5845 | 0.7590 | 0.7610 |
| 0.4007 | 25.25 | 5000 | 0.5592 | 0.7588 | 0.7598 |
| 0.3985 | 26.26 | 5200 | 0.5861 | 0.7461 | 0.7468 |
| 0.3953 | 27.27 | 5400 | 0.5780 | 0.7446 | 0.7487 |
| 0.3932 | 28.28 | 5600 | 0.5663 | 0.7539 | 0.7551 |
| 0.3865 | 29.29 | 5800 | 0.5922 | 0.7492 | 0.7522 |
| 0.38 | 30.3 | 6000 | 0.5843 | 0.7538 | 0.7551 |
| 0.375 | 31.31 | 6200 | 0.5842 | 0.7572 | 0.7582 |
| 0.3731 | 32.32 | 6400 | 0.5896 | 0.7554 | 0.7576 |
| 0.3687 | 33.33 | 6600 | 0.5929 | 0.7562 | 0.7582 |
| 0.3631 | 34.34 | 6800 | 0.5849 | 0.7518 | 0.7525 |
| 0.3608 | 35.35 | 7000 | 0.5989 | 0.7554 | 0.7563 |
| 0.3588 | 36.36 | 7200 | 0.6069 | 0.7505 | 0.7519 |
| 0.3515 | 37.37 | 7400 | 0.6105 | 0.7490 | 0.7506 |
| 0.3515 | 38.38 | 7600 | 0.5985 | 0.7498 | 0.7506 |
| 0.3478 | 39.39 | 7800 | 0.6134 | 0.7591 | 0.7598 |
| 0.3491 | 40.4 | 8000 | 0.6023 | 0.7521 | 0.7538 |
| 0.3426 | 41.41 | 8200 | 0.6247 | 0.7460 | 0.7478 |
| 0.3412 | 42.42 | 8400 | 0.6173 | 0.7472 | 0.7497 |
| 0.3379 | 43.43 | 8600 | 0.6259 | 0.7472 | 0.7487 |
| 0.3324 | 44.44 | 8800 | 0.6305 | 0.7502 | 0.7516 |
| 0.3328 | 45.45 | 9000 | 0.6280 | 0.7525 | 0.7538 |
| 0.3333 | 46.46 | 9200 | 0.6281 | 0.7516 | 0.7525 |
| 0.3336 | 47.47 | 9400 | 0.6356 | 0.7461 | 0.7478 |
| 0.3247 | 48.48 | 9600 | 0.6292 | 0.7492 | 0.7503 |
| 0.3287 | 49.49 | 9800 | 0.6318 | 0.7488 | 0.7503 |
| 0.3325 | 50.51 | 10000 | 0.6320 | 0.7503 | 0.7516 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_8192_512_30M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_8192_512_30M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T04:59:22+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_8192\_512\_30M-L32\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5307
* F1 Score: 0.7708
* Accuracy: 0.7727
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_8192_512_30M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_30M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_30M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4549
- F1 Score: 0.8061
- Accuracy: 0.8076
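
The card gives no inference example. The sketch below shows how one might score a sequence with this adapter; the example sequence, `num_labels=2`, and the interpretation of the two classes (mark absent vs. present) are assumptions.

```python
# Sketch: classify one DNA sequence with the fine-tuned adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_30M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_8192_512_30M-L1_f"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(
    AutoModelForSequenceClassification.from_pretrained(
        base_id, num_labels=2, trust_remote_code=True),
    adapter_id,
).eval()

inputs = tokenizer("ACGTACGTACGTACGTACGT", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class probabilities; label meaning is not documented on the card
```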
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5681 | 0.92 | 200 | 0.5310 | 0.7403 | 0.7437 |
| 0.5184 | 1.83 | 400 | 0.5164 | 0.7536 | 0.7569 |
| 0.4995 | 2.75 | 600 | 0.5032 | 0.7640 | 0.7666 |
| 0.4997 | 3.67 | 800 | 0.4884 | 0.7806 | 0.7815 |
| 0.4836 | 4.59 | 1000 | 0.4878 | 0.7814 | 0.7827 |
| 0.478 | 5.5 | 1200 | 0.4788 | 0.7802 | 0.7821 |
| 0.4755 | 6.42 | 1400 | 0.4785 | 0.7881 | 0.7890 |
| 0.4711 | 7.34 | 1600 | 0.4849 | 0.7825 | 0.7847 |
| 0.4658 | 8.26 | 1800 | 0.4783 | 0.7875 | 0.7887 |
| 0.4712 | 9.17 | 2000 | 0.4739 | 0.7878 | 0.7893 |
| 0.4662 | 10.09 | 2200 | 0.4862 | 0.7776 | 0.7804 |
| 0.461 | 11.01 | 2400 | 0.4679 | 0.7887 | 0.7901 |
| 0.4578 | 11.93 | 2600 | 0.4647 | 0.7914 | 0.7924 |
| 0.4586 | 12.84 | 2800 | 0.4689 | 0.7915 | 0.7933 |
| 0.4547 | 13.76 | 3000 | 0.4756 | 0.7876 | 0.7896 |
| 0.4532 | 14.68 | 3200 | 0.4659 | 0.7920 | 0.7930 |
| 0.4548 | 15.6 | 3400 | 0.4649 | 0.7911 | 0.7930 |
| 0.4519 | 16.51 | 3600 | 0.4671 | 0.7924 | 0.7939 |
| 0.4503 | 17.43 | 3800 | 0.4612 | 0.7949 | 0.7962 |
| 0.446 | 18.35 | 4000 | 0.4679 | 0.7911 | 0.7927 |
| 0.4499 | 19.27 | 4200 | 0.4675 | 0.7931 | 0.7947 |
| 0.4497 | 20.18 | 4400 | 0.4767 | 0.7893 | 0.7916 |
| 0.4435 | 21.1 | 4600 | 0.4728 | 0.7908 | 0.7924 |
| 0.4458 | 22.02 | 4800 | 0.4701 | 0.7900 | 0.7916 |
| 0.4448 | 22.94 | 5000 | 0.4614 | 0.7937 | 0.7950 |
| 0.4416 | 23.85 | 5200 | 0.4630 | 0.7908 | 0.7924 |
| 0.4428 | 24.77 | 5400 | 0.4784 | 0.7893 | 0.7916 |
| 0.4397 | 25.69 | 5600 | 0.4661 | 0.7935 | 0.7950 |
| 0.442 | 26.61 | 5800 | 0.4639 | 0.7935 | 0.7947 |
| 0.4428 | 27.52 | 6000 | 0.4802 | 0.7897 | 0.7919 |
| 0.4383 | 28.44 | 6200 | 0.4652 | 0.7940 | 0.7956 |
| 0.4398 | 29.36 | 6400 | 0.4696 | 0.7921 | 0.7942 |
| 0.4394 | 30.28 | 6600 | 0.4685 | 0.7910 | 0.7930 |
| 0.4391 | 31.19 | 6800 | 0.4645 | 0.7923 | 0.7936 |
| 0.4387 | 32.11 | 7000 | 0.4687 | 0.7902 | 0.7921 |
| 0.4353 | 33.03 | 7200 | 0.4680 | 0.7920 | 0.7936 |
| 0.4356 | 33.94 | 7400 | 0.4722 | 0.7940 | 0.7956 |
| 0.4373 | 34.86 | 7600 | 0.4678 | 0.7919 | 0.7936 |
| 0.4358 | 35.78 | 7800 | 0.4660 | 0.7897 | 0.7913 |
| 0.4368 | 36.7 | 8000 | 0.4675 | 0.7925 | 0.7942 |
| 0.4353 | 37.61 | 8200 | 0.4743 | 0.7901 | 0.7924 |
| 0.4357 | 38.53 | 8400 | 0.4652 | 0.7928 | 0.7942 |
| 0.4339 | 39.45 | 8600 | 0.4704 | 0.7911 | 0.7927 |
| 0.4338 | 40.37 | 8800 | 0.4763 | 0.7909 | 0.7930 |
| 0.4379 | 41.28 | 9000 | 0.4672 | 0.7916 | 0.7936 |
| 0.4327 | 42.2 | 9200 | 0.4660 | 0.7918 | 0.7933 |
| 0.4315 | 43.12 | 9400 | 0.4690 | 0.7917 | 0.7933 |
| 0.4339 | 44.04 | 9600 | 0.4683 | 0.7926 | 0.7944 |
| 0.4328 | 44.95 | 9800 | 0.4696 | 0.7923 | 0.7942 |
| 0.4322 | 45.87 | 10000 | 0.4688 | 0.7916 | 0.7933 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_30M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_8192_512_30M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_8192_512_30M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_30M",
"region:us"
] | null | 2024-04-27T04:59:22+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us
| GUE\_EMP\_H3K36me3-seqsight\_8192\_512\_30M-L1\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_30M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4549
* F1 Score: 0.8061
* Accuracy: 0.8076
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_30M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |