pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1 to 900k) | metadata (stringlengths 2 to 438k) | id (stringlengths 5 to 122) | last_modified (null) | tags (listlengths 1 to 1.84k) | sha (null) | created_at (stringlengths 25 to 25) | arxiv (listlengths 0 to 201) | languages (listlengths 0 to 1.83k) | tags_str (stringlengths 17 to 9.34k) | text_str (stringlengths 0 to 389k) | text_lists (listlengths 0 to 722) | processed_texts (listlengths 1 to 723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | null |
# DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q8_0-GGUF
This model was converted to GGUF format from [`alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline`](https://huggingface.co/alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q8_0-GGUF --model bloom-1b7-creative-writing-it-baseline.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q8_0-GGUF --model bloom-1b7-creative-writing-it-baseline.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m bloom-1b7-creative-writing-it-baseline.Q8_0.gguf -n 128
```
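Alternatively, here is a minimal sketch using the llama-cpp-python bindings. This is not part of the original conversion workflow; it assumes `pip install llama-cpp-python` and that the GGUF file above has already been downloaded locally:
```python
from llama_cpp import Llama

# Minimal sketch: load the downloaded GGUF file and run a short completion.
llm = Llama(model_path="bloom-1b7-creative-writing-it-baseline.Q8_0.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```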
|
{"license": "bigscience-bloom-rail-1.0", "tags": ["generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "bigscience/bloom-1b7", "model-index": [{"name": "Bloom-1b7-creative-writing-IT", "results": []}]}
|
DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q8_0-GGUF
| null |
[
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:bigscience/bloom-1b7",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null |
2024-04-15T01:37:49+00:00
|
[] |
[] |
TAGS
#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-bigscience/bloom-1b7 #license-bigscience-bloom-rail-1.0 #region-us
|
# DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q8_0-GGUF
This model was converted to GGUF format from 'alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q8_0-GGUF\nThis model was converted to GGUF format from 'alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-bigscience/bloom-1b7 #license-bigscience-bloom-rail-1.0 #region-us \n",
"# DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q8_0-GGUF\nThis model was converted to GGUF format from 'alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text2text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
alexyhc/flan-t5-large-ds
| null |
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T01:38:17+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q6_K-GGUF
This model was converted to GGUF format from [`alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline`](https://huggingface.co/alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q6_K-GGUF --model bloom-1b7-creative-writing-it-baseline.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q6_K-GGUF --model bloom-1b7-creative-writing-it-baseline.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m bloom-1b7-creative-writing-it-baseline.Q6_K.gguf -n 128
```
|
{"license": "bigscience-bloom-rail-1.0", "tags": ["generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "bigscience/bloom-1b7", "model-index": [{"name": "Bloom-1b7-creative-writing-IT", "results": []}]}
|
DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q6_K-GGUF
| null |
[
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:bigscience/bloom-1b7",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null |
2024-04-15T01:40:00+00:00
|
[] |
[] |
TAGS
#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-bigscience/bloom-1b7 #license-bigscience-bloom-rail-1.0 #region-us
|
# DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q6_K-GGUF
This model was converted to GGUF format from 'alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q6_K-GGUF\nThis model was converted to GGUF format from 'alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-bigscience/bloom-1b7 #license-bigscience-bloom-rail-1.0 #region-us \n",
"# DavidAU/Bloom-1b7-creative-writing-IT-baseline-Q6_K-GGUF\nThis model was converted to GGUF format from 'alonzogarbanzo/Bloom-1b7-creative-writing-IT-baseline' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/Writing_Partner_Mistral_7B-Q6_K-GGUF
This model was converted to GGUF format from [`FPHam/Writing_Partner_Mistral_7B`](https://huggingface.co/FPHam/Writing_Partner_Mistral_7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FPHam/Writing_Partner_Mistral_7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Writing_Partner_Mistral_7B-Q6_K-GGUF --model writing_partner_mistral_7b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Writing_Partner_Mistral_7B-Q6_K-GGUF --model writing_partner_mistral_7b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m writing_partner_mistral_7b.Q6_K.gguf -n 128
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["mistral", "instruct", "finetune", "chatml", "gpt4", "llama-cpp", "gguf-my-repo"]}
|
DavidAU/Writing_Partner_Mistral_7B-Q6_K-GGUF
| null |
[
"gguf",
"mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"llama-cpp",
"gguf-my-repo",
"en",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T01:41:04+00:00
|
[] |
[
"en"
] |
TAGS
#gguf #mistral #instruct #finetune #chatml #gpt4 #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us
|
# DavidAU/Writing_Partner_Mistral_7B-Q6_K-GGUF
This model was converted to GGUF format from 'FPHam/Writing_Partner_Mistral_7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/Writing_Partner_Mistral_7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'FPHam/Writing_Partner_Mistral_7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #mistral #instruct #finetune #chatml #gpt4 #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us \n",
"# DavidAU/Writing_Partner_Mistral_7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'FPHam/Writing_Partner_Mistral_7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemini-1.5-pro-gemma-rewrite-1024
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
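Since this card describes a PEFT adapter on top of google/gemma-2b, a minimal loading sketch might look like the following. The adapter id `mooo16/gemini-1.5-pro-gemma-rewrite-1024` is taken from this card's metadata, access to the gated Gemma weights is assumed, and the prompt is only a placeholder:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Minimal sketch: load the base model, then attach this PEFT adapter.
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "mooo16/gemini-1.5-pro-gemma-rewrite-1024")

inputs = tokenizer("Rewrite this sentence more formally: hey, what's up?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```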
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "gemini-1.5-pro-gemma-rewrite-1024", "results": []}]}
|
mooo16/gemini-1.5-pro-gemma-rewrite-1024
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null |
2024-04-15T01:42:06+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us
|
# gemini-1.5-pro-gemma-rewrite-1024
This model is a fine-tuned version of google/gemma-2b on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# gemini-1.5-pro-gemma-rewrite-1024\n\nThis model is a fine-tuned version of google/gemma-2b on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0246",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us \n",
"# gemini-1.5-pro-gemma-rewrite-1024\n\nThis model is a fine-tuned version of google/gemma-2b on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0246",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# lust-7b
experimental rp model.
## prompt format
this one's a bit funky.
```
<|description|>Character
Character is blah blah blah</s>
<|description|>Character 2
Character 2 is blah blah blah (optional to make more than one)</s>
<|narrator|>
Describe what you want to happen in the scenario (I don't even know if this works)
<|message|>Character
Character does blah blah blah</s>
<|message|>Character 2
Character 2 does blah blah blah</s>
<|message|>Character
[start model generation here!]
```
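as a rough illustration, here is a minimal Python sketch of assembling this prompt format; the character names, descriptions, and messages are placeholder examples, and `</s>` is assumed to be the stop token exactly as shown in the template:
```python
# Minimal sketch of the prompt format above; all names and texts are placeholders.
def build_prompt(characters, scenario, turns, next_speaker):
    parts = [f"<|description|>{name}\n{desc}</s>" for name, desc in characters.items()]
    parts.append(f"<|narrator|>\n{scenario}")
    parts += [f"<|message|>{name}\n{text}</s>" for name, text in turns]
    parts.append(f"<|message|>{next_speaker}\n")  # model generation starts here
    return "\n".join(parts)

prompt = build_prompt(
    characters={"Alice": "Alice is a curious traveler."},
    scenario="Alice arrives at a quiet mountain inn late at night.",
    turns=[("Alice", "Alice pushes open the creaking door.")],
    next_speaker="Alice",
)
print(prompt)
```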
sillytavern templates: TODO
## quants
gguf: https://huggingface.co/mradermacher/lust-7b-GGUF (thanks @mradermacher!)
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["roleplay", "conversational", "trl", "unsloth"], "datasets": ["Fizzarolli/rpguild_processed", "Fizzarolli/bluemoon_processeed"]}
|
Fizzarolli/lust-7b
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"roleplay",
"conversational",
"trl",
"unsloth",
"en",
"dataset:Fizzarolli/rpguild_processed",
"dataset:Fizzarolli/bluemoon_processeed",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T01:42:29+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #roleplay #conversational #trl #unsloth #en #dataset-Fizzarolli/rpguild_processed #dataset-Fizzarolli/bluemoon_processeed #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# lust-7b
experimental rp model.
## prompt format
this one's a bit funky.
sillytavern templates: TODO
## quants
gguf: URL (thanks @mradermacher!)
|
[
"# lust-7b\nexperimental rp model.",
"## prompt format\nthis one's a bit funky.\n\nsillytavern templates: TODO",
"## quants\ngguf: URL (thanks @mradermacher!)"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #roleplay #conversational #trl #unsloth #en #dataset-Fizzarolli/rpguild_processed #dataset-Fizzarolli/bluemoon_processeed #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# lust-7b\nexperimental rp model.",
"## prompt format\nthis one's a bit funky.\n\nsillytavern templates: TODO",
"## quants\ngguf: URL (thanks @mradermacher!)"
] |
automatic-speech-recognition
|
transformers
|
# Model Card for Model ID

<!-- Generated using cagliostrolab/animagine-xl-3.0 -->
<!--Prompt: 1girl, black long hair, suit, headphone, write down, upper body, indoor, night, masterpiece, best quality -->
Fine-tuned ASR model from [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2).
This model is aimed at transcribing Japanese audio, especially from visual novels.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** spow12(yw_nam)
- **Shared by :** spow12(yw_nam)
- **Model type:** Seq2Seq
- **Language(s) (NLP):** japanese
- **Finetuned from model :** [distil-whisper/distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2).
## Uses
```python
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
import librosa

processor = AutoProcessor.from_pretrained('spow12/Visual-novel-transcriptor', language="ja", task="transcribe")
model = AutoModelForSpeechSeq2Seq.from_pretrained('spow12/Visual-novel-transcriptor').cuda()
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="ja", task="transcribe")

wav_path = "audio.wav"  # placeholder: path to the audio file you want to transcribe
data, _ = librosa.load(wav_path, sr=16000)  # Whisper models expect 16 kHz audio
input_features = processor(data, sampling_rate=16000, return_tensors="pt").input_features.cuda()

predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription[0])
```
## Bias, Risks, and Limitations
This model was trained on Japanese datasets, including visual novels that contain NSFW content.
## Use & Credit
This model is currently available for non-commercial use only. Also, since I'm not well versed in licensing, I hope you use it responsibly.
By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and anime persons).
## Citation
```bibtex
@misc {Visual-novel-transcriptor,
author = { {YoungWoo Nam} },
title = { Visual-novel-transcriptor },
year = 2024,
url = { https://huggingface.co/spow12/Visual-novel-transcriptor },
publisher = { Hugging Face }
}
```
|
{"language": ["ja", "en"], "library_name": "transformers", "datasets": ["reazon-research/reazonspeech", "joujiboi/japanese-anime-speech"], "metrics": ["cer"], "pipeline_tag": "automatic-speech-recognition"}
|
spow12/Visual-novel-transcriptor
| null |
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"ja",
"en",
"dataset:reazon-research/reazonspeech",
"dataset:joujiboi/japanese-anime-speech",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2024-04-15T01:43:07+00:00
|
[] |
[
"ja",
"en"
] |
TAGS
#transformers #safetensors #whisper #automatic-speech-recognition #ja #en #dataset-reazon-research/reazonspeech #dataset-joujiboi/japanese-anime-speech #endpoints_compatible #has_space #region-us
|
# Model Card for Model ID
!image
Fine-tuned ASR model from distil-whisper/distil-large-v2.
This model is aimed at transcribing Japanese audio, especially from visual novels.
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: spow12(yw_nam)
- Shared by : spow12(yw_nam)
- Model type: Seq2Seq
- Language(s) (NLP): japanese
- Finetuned from model : distil-whisper/distil-large-v2.
## Uses
## Bias, Risks, and Limitations
This model was trained on Japanese datasets, including visual novels that contain NSFW content.
## Use & Credit
This model is currently available for non-commercial use only. Also, since I'm not well versed in licensing, I hope you use it responsibly.
By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and anime persons).
|
[
"# Model Card for Model ID\n\n!image\n\n\n\n\n\nFine tunned ASR model from distil-whisper/distil-large-v2.\n\nThis model aimed to transcribe japanese audio especially visual novel.",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: spow12(yw_nam)\n- Shared by : spow12(yw_nam)\n- Model type: Seq2Seq\n- Language(s) (NLP): japanese\n- Finetuned from model : distil-whisper/distil-large-v2.",
"## Uses",
"## Bias, Risks, and Limitations\n\nThis model trained by japanese dataset included visual novel which contain nsfw content.",
"## Use & Credit\n\nThis model is currently available for non-commercial use only. Also, since I'm not detailed in licensing, I hope you use it responsibly. \n\nBy sharing this model, I hope to contribute to the research efforts of our community (the open-source community and anime persons)."
] |
[
"TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #ja #en #dataset-reazon-research/reazonspeech #dataset-joujiboi/japanese-anime-speech #endpoints_compatible #has_space #region-us \n",
"# Model Card for Model ID\n\n!image\n\n\n\n\n\nFine tunned ASR model from distil-whisper/distil-large-v2.\n\nThis model aimed to transcribe japanese audio especially visual novel.",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: spow12(yw_nam)\n- Shared by : spow12(yw_nam)\n- Model type: Seq2Seq\n- Language(s) (NLP): japanese\n- Finetuned from model : distil-whisper/distil-large-v2.",
"## Uses",
"## Bias, Risks, and Limitations\n\nThis model trained by japanese dataset included visual novel which contain nsfw content.",
"## Use & Credit\n\nThis model is currently available for non-commercial use only. Also, since I'm not detailed in licensing, I hope you use it responsibly. \n\nBy sharing this model, I hope to contribute to the research efforts of our community (the open-source community and anime persons)."
] |
text-generation
|
transformers
|
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
|
frcp/jobtalks_llama_v1
| null |
[
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T01:43:20+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
|
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit AutoTrain.
# Usage
|
[
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] |
[
"TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n",
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] |
null | null |
# andreass123/EEVE-Korean-2.8B-v1.0-Q8_0-GGUF
This model was converted to GGUF format from [`yanolja/EEVE-Korean-2.8B-v1.0`](https://huggingface.co/yanolja/EEVE-Korean-2.8B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/yanolja/EEVE-Korean-2.8B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo andreass123/EEVE-Korean-2.8B-v1.0-Q8_0-GGUF --model eeve-korean-2.8b-v1.0.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo andreass123/EEVE-Korean-2.8B-v1.0-Q8_0-GGUF --model eeve-korean-2.8b-v1.0.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m eeve-korean-2.8b-v1.0.Q8_0.gguf -n 128
```
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "microsoft/phi-2", "model-index": [{"name": "yanolja/EEVE-Korean-2.8B-v1.0", "results": []}]}
|
andreass123/EEVE-Korean-2.8B-v1.0-Q8_0-GGUF
| null |
[
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:microsoft/phi-2",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T01:43:57+00:00
|
[] |
[] |
TAGS
#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-microsoft/phi-2 #license-apache-2.0 #region-us
|
# andreass123/EEVE-Korean-2.8B-v1.0-Q8_0-GGUF
This model was converted to GGUF format from 'yanolja/EEVE-Korean-2.8B-v1.0' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# andreass123/EEVE-Korean-2.8B-v1.0-Q8_0-GGUF\nThis model was converted to GGUF format from 'yanolja/EEVE-Korean-2.8B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-microsoft/phi-2 #license-apache-2.0 #region-us \n",
"# andreass123/EEVE-Korean-2.8B-v1.0-Q8_0-GGUF\nThis model was converted to GGUF format from 'yanolja/EEVE-Korean-2.8B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/Writing_Partner_Mistral_7B-Q8_0-GGUF
This model was converted to GGUF format from [`FPHam/Writing_Partner_Mistral_7B`](https://huggingface.co/FPHam/Writing_Partner_Mistral_7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FPHam/Writing_Partner_Mistral_7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Writing_Partner_Mistral_7B-Q8_0-GGUF --model writing_partner_mistral_7b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Writing_Partner_Mistral_7B-Q8_0-GGUF --model writing_partner_mistral_7b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m writing_partner_mistral_7b.Q8_0.gguf -n 128
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["mistral", "instruct", "finetune", "chatml", "gpt4", "llama-cpp", "gguf-my-repo"]}
|
DavidAU/Writing_Partner_Mistral_7B-Q8_0-GGUF
| null |
[
"gguf",
"mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"llama-cpp",
"gguf-my-repo",
"en",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T01:44:46+00:00
|
[] |
[
"en"
] |
TAGS
#gguf #mistral #instruct #finetune #chatml #gpt4 #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us
|
# DavidAU/Writing_Partner_Mistral_7B-Q8_0-GGUF
This model was converted to GGUF format from 'FPHam/Writing_Partner_Mistral_7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/Writing_Partner_Mistral_7B-Q8_0-GGUF\nThis model was converted to GGUF format from 'FPHam/Writing_Partner_Mistral_7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #mistral #instruct #finetune #chatml #gpt4 #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us \n",
"# DavidAU/Writing_Partner_Mistral_7B-Q8_0-GGUF\nThis model was converted to GGUF format from 'FPHam/Writing_Partner_Mistral_7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation
|
transformers
|
# What is it?
A MoE model for roleplaying. Since a 7B model is small enough, we can combine several of them into a bigger model (which CAN be smarter).
Adapted to (some limited) TSF (Trans Sexual Fiction) content because I have included my pre-trained model in the merge.
Worse than V1 in logic, but better in expression.
# GGUF Version?
[Here](https://huggingface.co/Alsebay/NaruMOE-3x7B-v2-GGUF/)
# Recipe?
You can see the base model section.
# Why 3x7B?
I tested that a 16GB VRAM card can fit a < 20B model in GGUF format with 4-8k context length. I don't want to make a model that I can't use.
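As a rough sanity check of that claim, here is a back-of-the-envelope estimate in Python. The parameter count and bits-per-weight figures are assumptions (roughly 18.5B total parameters for a 3x7B Mixtral-style MoE with shared attention, and about 4.5 bits per weight for a Q4_K_M-style quant), not measured values:
```python
# Rough VRAM estimate for a 3x7B MoE in GGUF form; every number here is an assumption.
params = 18.5e9            # assumed total parameter count for a 3x7B MoE with shared attention
bits_per_weight = 4.5      # assumed average for a Q4_K_M-style quant
weights_gib = params * bits_per_weight / 8 / 1024**3
kv_and_overhead_gib = 1.5  # assumed KV cache plus runtime overhead at 4-8k context
print(f"~{weights_gib + kv_and_overhead_gib:.1f} GiB total")  # well under a 16 GB card
```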
|
{"license": "cc-by-nc-4.0", "tags": ["moe", "merge", "roleplay", "Roleplay"], "base_model": ["Alsebay/NarumashiRTS-V2", "SanjiWatsuki/Kunoichi-DPO-v2-7B", "Nitral-AI/KukulStanta-7B"]}
|
Alsebay/NaruMOE-3x7B-v2
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"roleplay",
"Roleplay",
"base_model:Alsebay/NarumashiRTS-V2",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:Nitral-AI/KukulStanta-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T01:44:57+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mixtral #text-generation #moe #merge #roleplay #Roleplay #base_model-Alsebay/NarumashiRTS-V2 #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-Nitral-AI/KukulStanta-7B #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# What is it?
A MoE model for roleplaying. Since a 7B model is small enough, we can combine several of them into a bigger model (which CAN be smarter).
Adapted to (some limited) TSF (Trans Sexual Fiction) content because I have included my pre-trained model in the merge.
Worse than V1 in logic, but better in expression.
# GGUF Version?
Here
# Recipe?
You can see the base model section.
# Why 3x7B?
I tested that a 16GB VRAM card can fit a < 20B model in GGUF format with 4-8k context length. I don't want to make a model that I can't use.
|
[
"# What is is?\n\nA MoE model for Roleplaying. Since 7B model is small enough, we can combine them to a bigger model (Which CAN be smarter).\n\nAdapte (some limited) TSF (Trans Sexual Fiction) content because I have include my pre-train model in.\n\nWorse than V1 in logic, but better in expression.",
"# GGUF Version?\nHere",
"# Recipe?\n\nYou could see base model section",
"# Why 3x7B?\n\nI test on 16GB VRAM card could fit < 20B model GGUF version with 4-8k context length. I don't want make a model that I can't use."
] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #merge #roleplay #Roleplay #base_model-Alsebay/NarumashiRTS-V2 #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-Nitral-AI/KukulStanta-7B #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# What is is?\n\nA MoE model for Roleplaying. Since 7B model is small enough, we can combine them to a bigger model (Which CAN be smarter).\n\nAdapte (some limited) TSF (Trans Sexual Fiction) content because I have include my pre-train model in.\n\nWorse than V1 in logic, but better in expression.",
"# GGUF Version?\nHere",
"# Recipe?\n\nYou could see base model section",
"# Why 3x7B?\n\nI test on 16GB VRAM card could fit < 20B model GGUF version with 4-8k context length. I don't want make a model that I can't use."
] |
reinforcement-learning
|
stable-baselines3
|
# **A2C** Agent playing **PandaReachDense-v3**
## General information about the project:
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). It controls a robotic arm to reach a target position.
### What I did:
Manually tuned hyperparameters by adding "learning_rate=0.0007, n_steps=5, gamma=0.99, gae_lambda=0.95" to the A2C model.
```python
model = A2C(policy = "MultiInputPolicy",
env = env,
learning_rate=0.0007,
n_steps=5,
gamma=0.99,
gae_lambda=0.95,
verbose=1)
```
## Links to relevant resources such as tutorials.
Reinforcement Learning Tips and Tricks: https://stable-baselines3.readthedocs.io/en/master/guide/rl_tips.html
A GitHub training framework: https://github.com/DLR-RM/rl-baselines3-zoo
Poe (GPT-4): Showed me how to use Optuna for automated hyperparameter optimization, but I was still learning how it worked and couldn't get it to run properly.
```python
import optuna
import panda_gym  # assumed installed; importing it registers the PandaReachDense-v3 environment
from stable_baselines3 import A2C
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

def optimize_agent(trial):
    # Sample A2C hyperparameters for this trial.
    learning_rate = trial.suggest_float('learning_rate', 1e-5, 1, log=True)
    gamma = trial.suggest_float('gamma', 0.8, 0.9999)
    gae_lambda = trial.suggest_float('gae_lambda', 0.8, 0.99)
    n_steps = trial.suggest_int('n_steps', 5, 20)

    env = make_vec_env('PandaReachDense-v3')  # create the environment inside the objective
    # PandaReach uses dict observations, so MultiInputPolicy is required.
    model = A2C('MultiInputPolicy', env, verbose=0, learning_rate=learning_rate,
                gamma=gamma, gae_lambda=gae_lambda, n_steps=n_steps)
    model.learn(total_timesteps=5000)

    # Score the trial with the mean evaluation reward of the trained policy.
    mean_reward, _ = evaluate_policy(model, env, n_eval_episodes=10)
    return mean_reward

study = optuna.create_study(direction='maximize')
study.optimize(optimize_agent, n_trials=100)
print('Best hyperparameters:', study.best_params)
```
|
{"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.24 +/- 0.09", "name": "mean_reward", "verified": false}]}]}]}
|
daenielkim-66/a2c-PandaReachDense-v3
| null |
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-15T01:44:57+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# A2C Agent playing PandaReachDense-v3
## General information about the project:
This is a trained model of an A2C agent playing PandaReachDense-v3
using the stable-baselines3 library. It controls a robotic arm to reach a target position.
### What I did:
Manually tuned hyperparameters by adding "learning_rate=0.0007, n_steps=5, gamma=0.99, gae_lambda=0.95" to the A2C model.
## Links to relevant resources such as tutorials.
Reinforcement Learning Tips and Tricks: URL
A Github Training Framework : URL
Poe (GPT-4): Showed me how to use Optuna to do automated hyperparameter optimization, but I was still understanding how it worked and couldn't get it to run properly.
|
[
"# A2C Agent playing PandaReachDense-v3",
"## General information about the project:\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library. It controls a robotic arm to pick up balls.",
"### What I did:\nManually tuned hyperparameters by adding \"learning_rate=0.0007, n_steps=5, gamma=0.99, gae_lambda=0.95\" to the A2C model.",
"## Links to relevant resources such as tutorials.\nReinforcement Learning Tips and Tricks: URL\n\nA Github Training Framework : URL\n\nPoe (GPT-4): Showed me how to use Optuna to do automated hyperparameter optimization, but I was still understanding how it worked and couldn't get it to run properly."
] |
[
"TAGS\n#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# A2C Agent playing PandaReachDense-v3",
"## General information about the project:\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library. It controls a robotic arm to pick up balls.",
"### What I did:\nManually tuned hyperparameters by adding \"learning_rate=0.0007, n_steps=5, gamma=0.99, gae_lambda=0.95\" to the A2C model.",
"## Links to relevant resources such as tutorials.\nReinforcement Learning Tips and Tricks: URL\n\nA Github Training Framework : URL\n\nPoe (GPT-4): Showed me how to use Optuna to do automated hyperparameter optimization, but I was still understanding how it worked and couldn't get it to run properly."
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Cantonese - Daniel Chan
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2611
- Wer: 55.8860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2222 | 1.14 | 1000 | 0.2847 | 63.1879 |
| 0.1146 | 2.28 | 2000 | 0.2592 | 58.2725 |
| 0.0382 | 3.42 | 3000 | 0.2575 | 55.9216 |
| 0.024 | 4.57 | 4000 | 0.2611 | 55.8860 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
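For reference, here is a minimal inference sketch with the 🤗 Transformers pipeline API; the repo id `chandc/whisper-small-Cantonese` is taken from this card's metadata, `audio.wav` is a placeholder path, and ffmpeg is assumed to be available for audio decoding:
```python
from transformers import pipeline

# Minimal sketch: transcribe a local audio file with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="chandc/whisper-small-Cantonese")
print(asr("audio.wav")["text"])
```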
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small Cantonese - Daniel Chan", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 11.0", "type": "mozilla-foundation/common_voice_11_0", "config": "zh-HK", "split": "None", "args": "config: Cantonese, split: test"}, "metrics": [{"type": "wer", "value": 55.88601959038291, "name": "Wer"}]}]}]}
|
chandc/whisper-small-Cantonese
| null |
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T01:45:12+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #zh #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
Whisper Small Cantonese - Daniel Chan
=====================================
This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2611
* Wer: 55.8860
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 4000
### Training results
### Framework versions
* Transformers 4.38.1
* Pytorch 2.2.0
* Datasets 2.17.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.1\n* Pytorch 2.2.0\n* Datasets 2.17.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #zh #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.1\n* Pytorch 2.2.0\n* Datasets 2.17.0\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-microsoft-phi2-on-dialogsum
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4203 | 5.0 | 50 | 1.3966 |
| 1.2814 | 10.0 | 100 | 1.3639 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.1
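As a usage pointer, here is a minimal sketch for loading this adapter on top of its base model with the `peft` API (the adapter repo id is taken from this card's Hub path; adjust it if you host the adapter elsewhere):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load the base model and attach the fine-tuned PEFT adapter from the Hub.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = PeftModel.from_pretrained(base, "agitohere/sft-microsoft-phi2-on-dialogsum")
```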
|
{"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "sft-microsoft-phi2-on-dialogsum", "results": []}]}
|
agitohere/sft-microsoft-phi2-on-dialogsum
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null |
2024-04-15T01:45:40+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
|
sft-microsoft-phi2-on-dialogsum
===============================
This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3639
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 5
* total\_train\_batch\_size: 10
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 50
* training\_steps: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.36.2
* Pytorch 2.1.2
* Datasets 2.15.0
* Tokenizers 0.15.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 5\n* total\\_train\\_batch\\_size: 10\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* training\\_steps: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.15.0\n* Tokenizers 0.15.1"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 5\n* total\\_train\\_batch\\_size: 10\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* training\\_steps: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.15.0\n* Tokenizers 0.15.1"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/orpo-explorers/mistral-7b-orpo-v3.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
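For a programmatic route, a minimal sketch using `huggingface_hub` plus the `llama-cpp-python` bindings is shown below; this is one option among many GGUF-capable runtimes, and the chosen quant file is just one example from the table that follows:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python
# Download one quant from this repo and run a short completion.
gguf_path = hf_hub_download(
    repo_id="mradermacher/mistral-7b-orpo-v3.0-GGUF",
    filename="mistral-7b-orpo-v3.0.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
print(llm("The meaning of life is", max_tokens=64)["choices"][0]["text"])
```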
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-7b-orpo-v3.0-GGUF/resolve/main/mistral-7b-orpo-v3.0.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["alignment-handbook", "trl", "orpo", "generated_from_trainer", "trl", "orpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/distilabel-capybara-dpo-7k-binarized", "HuggingFaceH4/OpenHermesPreferences-10k"], "base_model": "orpo-explorers/mistral-7b-orpo-v3.0", "quantized_by": "mradermacher"}
|
mradermacher/mistral-7b-orpo-v3.0-GGUF
| null |
[
"transformers",
"gguf",
"alignment-handbook",
"trl",
"orpo",
"generated_from_trainer",
"en",
"dataset:HuggingFaceH4/distilabel-capybara-dpo-7k-binarized",
"dataset:HuggingFaceH4/OpenHermesPreferences-10k",
"base_model:orpo-explorers/mistral-7b-orpo-v3.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T01:45:47+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #alignment-handbook #trl #orpo #generated_from_trainer #en #dataset-HuggingFaceH4/distilabel-capybara-dpo-7k-binarized #dataset-HuggingFaceH4/OpenHermesPreferences-10k #base_model-orpo-explorers/mistral-7b-orpo-v3.0 #license-apache-2.0 #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #alignment-handbook #trl #orpo #generated_from_trainer #en #dataset-HuggingFaceH4/distilabel-capybara-dpo-7k-binarized #dataset-HuggingFaceH4/OpenHermesPreferences-10k #base_model-orpo-explorers/mistral-7b-orpo-v3.0 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
reinforcement-learning
|
ml-agents
|
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: BWangila/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
{"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]}
|
BWangila/ppo-SnowballTarget
| null |
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | null |
2024-04-15T01:46:29+00:00
|
[] |
[] |
TAGS
#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us
|
# ppo Agent playing SnowballTarget
This is a trained model of a ppo agent playing SnowballTarget
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: BWangila/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
|
[
"# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: BWangila/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
[
"TAGS\n#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us \n",
"# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: BWangila/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Erfan-Shayegani/llama2-lora_Unlearned_bad_weight_5e-2
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T01:47:38+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q4_K_M-GGUF
This model was converted to GGUF format from [`yanolja/EEVE-Korean-Instruct-2.8B-v1.0`](https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q4_K_M-GGUF --model eeve-korean-instruct-2.8b-v1.0.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q4_K_M-GGUF --model eeve-korean-instruct-2.8b-v1.0.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m eeve-korean-instruct-2.8b-v1.0.Q4_K_M.gguf -n 128
```
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "yanolja/EEVE-Korean-2.8B-v1.0", "model-index": [{"name": "yanolja/EEVE-Korean-Instruct-2.8B-v1.0", "results": []}]}
|
andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q4_K_M-GGUF
| null |
[
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:yanolja/EEVE-Korean-2.8B-v1.0",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T01:48:34+00:00
|
[] |
[] |
TAGS
#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-yanolja/EEVE-Korean-2.8B-v1.0 #license-apache-2.0 #region-us
|
# andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q4_K_M-GGUF
This model was converted to GGUF format from 'yanolja/EEVE-Korean-Instruct-2.8B-v1.0' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'yanolja/EEVE-Korean-Instruct-2.8B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-yanolja/EEVE-Korean-2.8B-v1.0 #license-apache-2.0 #region-us \n",
"# andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'yanolja/EEVE-Korean-Instruct-2.8B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
reinforcement-learning
|
stable-baselines3
|
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal loading sketch (the checkpoint filename is an assumption; adjust it to the zip actually stored in this repo):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub
# Checkpoint filename below is assumed; adjust to the zip stored in this repo.
path = load_from_hub("ashwanth18/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(path)
```
|
{"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.26 +/- 0.13", "name": "mean_reward", "verified": false}]}]}]}
|
ashwanth18/a2c-PandaReachDense-v3
| null |
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-15T01:50:24+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# A2C Agent playing PandaReachDense-v3
This is a trained model of a A2C agent playing PandaReachDense-v3
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
|
[
"# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
[
"TAGS\n#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
harikrishnad1997/emotion_tweet_t5-base_2024-04-15
| null |
[
"transformers",
"safetensors",
"t5",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T01:50:26+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #t5 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #t5 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|

## VAGO solutions SauerkrautLM-Qwen-32b
Introducing **SauerkrautLM-Qwen-32b** – our Sauerkraut version of the powerful [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)!
The model **SauerkrautLM-Qwen-32b** is a **joint effort** between **VAGO solutions** and **Hyperspace.ai.**
- Finetuned with **SFT**
- Aligned with **DPO**
# Table of Contents
1. [Overview of all SauerkrautLM-Qwen-32b](#all-SauerkrautLM-Qwen-32b)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training procedure](#proceed-of-the-training)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-Qwen-32b
| Model | HF | EXL2 | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Qwen-32b | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Qwen-32b) | coming soon | coming soon | coming soon |
## Model Details
**SauerkrautLM-Qwen-32b**
- **Model Type:** SauerkrautLM-Qwen-32b is a finetuned Model based on [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)
- **Language(s):** German, English
- **License:** [tongyi-qianwen-research](https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE)
- **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/)
### Training procedure:
- We trained this model for 2 epochs on 160k data samples with SFT.
- Afterwards we applied DPO for 1 epoch with 110k data.
- LaserRMT version coming soon
**We taught this model German language skills.** As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations that are not entirely correct may still occur (work in progress).
### Prompt Template:
**English:**
```
<|im_start|>system
You are SauerkrautLM, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
**German:**
```
<|im_start|>system
Du bist SauerkrautLM, ein hilfreicher und freundlicher KI-Assistent.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
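For completeness, a minimal sketch of building this prompt with the `transformers` chat-template API, assuming the released tokenizer ships the ChatML template shown above:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("VAGOsolutions/SauerkrautLM-Qwen-32b")
messages = [
    {"role": "system", "content": "You are SauerkrautLM, a helpful AI assistant."},
    {"role": "user", "content": "Summarize the model card in one sentence."},
]
# Renders the <|im_start|>/<|im_end|> structure and appends the assistant header.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```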
### Example output of german language:
```
Die Karte zeigte eine verborgene Höhle, die in den umliegenden Bergen zu finden war. Mia war von der Idee fasziniert, diese Höhle zu entdecken und ihre Geheimnisse zu lüften. Sie nahm die Karte mit sich und machte sich auf den Weg, um die Höhle zu finden.
Die Wanderung zu den Bergen war eine Herausforderung, aber Mia war fest entschlossen, ihr Abenteuer zu vollenden. Sie überwand steinige Wege und überquerte klirrende Bäche, die ihre Füße kühlten und ihr die Energie für den Rest des Weges gab.
Endlich erreichte Mia die Höhle, die von einem dichten Wald umgeben war. Die Höhle war ein Ort der Geheimnisse und des Staunens, der ihr Herz höher schlagen ließ. Sie betrat die Höhle, und die Dunkelheit umhüllte sie wie ein Schleier aus Stille.
In der Höhle fand Mia eine alte Schatzkiste, die mit einem alten, verwitterten Holz verziert war. Mit zitternden Händen öffnete sie die Schatzkiste und fand darin eine alte, zerfledderte Schriftrolle. Die Schriftrolle war ein geheimnisvolles Artefakt, das ihr die Geschichte der Höhle offenbarte.
```
## Evaluation
**Open LLM Leaderboard:**
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | **73.11** |
| ARC (25-shot) | 59.22 |
| HellaSwag (10-shot) | 82.32 |
| MMLU (5-shot) | 74.40|
| TruthfulQA (0-shot) | 61.03 |
| Winogrande (5-shot) | 82.16 |
| GSM8K (5-shot) | 79.53 |
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/)
## Acknowledgement
Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such a valuable model to the Open-Source community
|
{"language": ["de", "en"], "license": "other", "tags": ["sft", "dpo"], "license_name": "tongyi-qianwen-research", "license_link": "https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE"}
|
blockblockblock/SauerkrautLM-Qwen-32b-bpw2.25
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"sft",
"dpo",
"conversational",
"de",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T01:51:19+00:00
|
[] |
[
"de",
"en"
] |
TAGS
#transformers #safetensors #qwen2 #text-generation #sft #dpo #conversational #de #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
!SauerkrautLM
VAGO solutions SauerkrautLM-Qwen-32b
------------------------------------
Introducing SauerkrautLM-Qwen-32b – our Sauerkraut version of the powerful Qwen/Qwen1.5-32B!
The model SauerkrautLM-Qwen-32b is a joint effort between VAGO solutions and URL.
* Finetuned with SFT
* Aligned with DPO
Table of Contents
=================
1. Overview of all SauerkrautLM-Qwen-32b
2. Model Details
* Prompt template
* Training procedure
3. Evaluation
4. Disclaimer
5. Contact
6. Collaborations
7. Acknowledgement
All SauerkrautLM-Qwen-32b
-------------------------
Model Details
-------------
SauerkrautLM-Qwen-32b
* Model Type: SauerkrautLM-Qwen-32b is a finetuned Model based on Qwen/Qwen1.5-32B
* Language(s): German, English
* License: tongyi-qianwen-research
* Contact: VAGO solutions, URL
### Training procedure:
* We trained this model for 2 epochs on 160k data samples with SFT.
* Afterwards we applied DPO for 1 epoch with 110k data.
* LaserRMT version coming soon
We taught this model German language skills. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations that are not entirely correct may still occur (work in progress).
### Prompt Template:
English:
German:
### Example output of german language:
Evaluation
----------
Open LLM Leaderboard:
Disclaimer
----------
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
Contact
-------
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
Collaborations
--------------
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer
Acknowledgement
---------------
Many thanks to Qwen for providing such a valuable model to the Open-Source community
|
[
"### Training procedure:\n\n\n* We trained this model for 2 epochs on 160k data samples with SFT.\n* Afterwards we applied DPO for 1 epoch with 110k data.\n* LaserRMT version coming soon\n\n\nWe teached German language skills on this model. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).",
"### Prompt Template:\n\n\nEnglish:\n\n\nGerman:",
"### Example output of german language:\n\n\nEvaluation\n----------\n\n\nOpen LLM Leaderboard:\n\n\n\nDisclaimer\n----------\n\n\nWe must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.\nHowever, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.\nAdditionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.\n\n\nContact\n-------\n\n\nIf you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.\n\n\nCollaborations\n--------------\n\n\nWe are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer\n\n\nAcknowledgement\n---------------\n\n\nMany thanks to Qwen for providing such valuable model to the Open-Source community"
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #sft #dpo #conversational #de #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training procedure:\n\n\n* We trained this model for 2 epochs on 160k data samples with SFT.\n* Afterwards we applied DPO for 1 epoch with 110k data.\n* LaserRMT version coming soon\n\n\nWe teached German language skills on this model. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).",
"### Prompt Template:\n\n\nEnglish:\n\n\nGerman:",
"### Example output of german language:\n\n\nEvaluation\n----------\n\n\nOpen LLM Leaderboard:\n\n\n\nDisclaimer\n----------\n\n\nWe must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.\nHowever, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.\nAdditionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.\n\n\nContact\n-------\n\n\nIf you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.\n\n\nCollaborations\n--------------\n\n\nWe are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer\n\n\nAcknowledgement\n---------------\n\n\nMany thanks to Qwen for providing such valuable model to the Open-Source community"
] |
null | null |
# andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q8_0-GGUF
This model was converted to GGUF format from [`yanolja/EEVE-Korean-Instruct-2.8B-v1.0`](https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q8_0-GGUF --model eeve-korean-instruct-2.8b-v1.0.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q8_0-GGUF --model eeve-korean-instruct-2.8b-v1.0.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m eeve-korean-instruct-2.8b-v1.0.Q8_0.gguf -n 128
```
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "yanolja/EEVE-Korean-2.8B-v1.0", "model-index": [{"name": "yanolja/EEVE-Korean-Instruct-2.8B-v1.0", "results": []}]}
|
andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q8_0-GGUF
| null |
[
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:yanolja/EEVE-Korean-2.8B-v1.0",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T01:53:12+00:00
|
[] |
[] |
TAGS
#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-yanolja/EEVE-Korean-2.8B-v1.0 #license-apache-2.0 #region-us
|
# andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q8_0-GGUF
This model was converted to GGUF format from 'yanolja/EEVE-Korean-Instruct-2.8B-v1.0' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q8_0-GGUF\nThis model was converted to GGUF format from 'yanolja/EEVE-Korean-Instruct-2.8B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-yanolja/EEVE-Korean-2.8B-v1.0 #license-apache-2.0 #region-us \n",
"# andreass123/EEVE-Korean-Instruct-2.8B-v1.0-Q8_0-GGUF\nThis model was converted to GGUF format from 'yanolja/EEVE-Korean-Instruct-2.8B-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-accelerate1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0660
- Precision: 0.9330
- Recall: 0.9512
- F1: 0.9420
- Accuracy: 0.9869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0377 | 1.0 | 1756 | 0.0631 | 0.9229 | 0.9392 | 0.9310 | 0.9844 |
| 0.0199 | 2.0 | 3512 | 0.0668 | 0.9343 | 0.9451 | 0.9397 | 0.9858 |
| 0.0095 | 3.0 | 5268 | 0.0660 | 0.9330 | 0.9512 | 0.9420 | 0.9869 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
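As a quick sanity check, the fine-tuned checkpoint can be exercised with the `token-classification` pipeline. This is a hedged sketch, not part of the generated card; it assumes the weights were pushed to the Hub under the repo id shown for this model (`BrandonM001/bert-finetuned-ner-accelerate1`).
```python
# Sketch: run the NER checkpoint with the Transformers pipeline.
from transformers import pipeline

# Repo id taken from this card's metadata; adjust if you load from a local path.
ner = pipeline(
    "token-classification",
    model="BrandonM001/bert-finetuned-ner-accelerate1",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Hugging Face is based in New York City."))
```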
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-finetuned-ner-accelerate1", "results": []}]}
|
BrandonM001/bert-finetuned-ner-accelerate1
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T01:54:13+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-finetuned-ner-accelerate1
==============================
This model is a fine-tuned version of bert-base-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0660
* Precision: 0.9330
* Recall: 0.9512
* F1: 0.9420
* Accuracy: 0.9869
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TSC_classification_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0442
- Precision: 0.8034
- Recall: 0.7769
- F1: 0.7899
- Accuracy: 0.9944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 48 | 0.0448 | 0.4732 | 0.4380 | 0.4549 | 0.9866 |
| No log | 2.0 | 96 | 0.0389 | 0.5349 | 0.5702 | 0.552 | 0.9902 |
| No log | 3.0 | 144 | 0.0346 | 0.7154 | 0.7273 | 0.7213 | 0.9932 |
| No log | 4.0 | 192 | 0.0355 | 0.7611 | 0.7107 | 0.7350 | 0.9937 |
| No log | 5.0 | 240 | 0.0375 | 0.7603 | 0.7603 | 0.7603 | 0.9939 |
| No log | 6.0 | 288 | 0.0376 | 0.7478 | 0.7107 | 0.7288 | 0.9937 |
| No log | 7.0 | 336 | 0.0414 | 0.7699 | 0.7190 | 0.7436 | 0.9939 |
| No log | 8.0 | 384 | 0.0427 | 0.7778 | 0.7521 | 0.7647 | 0.9942 |
| No log | 9.0 | 432 | 0.0432 | 0.8120 | 0.7851 | 0.7983 | 0.9947 |
| No log | 10.0 | 480 | 0.0438 | 0.7983 | 0.7851 | 0.7917 | 0.9947 |
| 0.0095 | 11.0 | 528 | 0.0441 | 0.8034 | 0.7769 | 0.7899 | 0.9944 |
| 0.0095 | 12.0 | 576 | 0.0442 | 0.8034 | 0.7769 | 0.7899 | 0.9944 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
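The snippet below is an illustrative sketch (not part of the generated card) showing how the checkpoint could be queried directly, assuming it was pushed to the Hub as `SiriusW/TSC_classification_model`; the label names come from the model's own `id2label` config.
```python
# Sketch: token classification without the pipeline helper.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo = "SiriusW/TSC_classification_model"  # repo id from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

inputs = tokenizer("Example sentence to tag.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print([(tok, model.config.id2label[i]) for tok, i in zip(tokens, predicted_ids)])
```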
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "TSC_classification_model", "results": []}]}
|
SiriusW/TSC_classification_model
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T01:54:16+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
TSC\_classification\_model
==========================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0442
* Precision: 0.8034
* Recall: 0.7769
* F1: 0.7899
* Accuracy: 0.9944
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 12
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
sentence-similarity
|
sentence-transformers
|
# Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45')
model = AutoModel.from_pretrained('Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1178 with parameters:
```
{'batch_size': 86, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`src.models.utils.custom_parts.CustomMultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 500,
"evaluator": "src.models.utils.custom_parts.CustomEmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["Santp98/query_generated-title-secop2"], "pipeline_tag": "sentence-similarity"}
|
Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45
| null |
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:Santp98/query_generated-title-secop2",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T01:58:03+00:00
|
[] |
[] |
TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #dataset-Santp98/query_generated-title-secop2 #endpoints_compatible #region-us
|
# Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 1178 with parameters:
Loss:
'URL.custom_parts.CustomMultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1178 with parameters:\n\n\nLoss:\n\n'URL.custom_parts.CustomMultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #dataset-Santp98/query_generated-title-secop2 #endpoints_compatible #region-us \n",
"# Santp98/SBERT-pairs-bert-base-spanish-wwm-cased-2023-11-13-22-45\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 1178 with parameters:\n\n\nLoss:\n\n'URL.custom_parts.CustomMultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1231
- Accuracy: 0.69
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 1.5999 | 0.558 |
| 1.85 | 2.0 | 500 | 1.2074 | 0.662 |
| 1.85 | 3.0 | 750 | 1.1231 | 0.69 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Tokenizers 0.15.2
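Despite the name, this card is tagged for text classification, so a classification pipeline is the natural way to try it. The example below is a hedged sketch (not part of the generated card) and assumes the checkpoint is available on the Hub as `SaiSaketh/my_awesome_qa_model`.
```python
# Sketch: run the fine-tuned classifier on a sample input.
from transformers import pipeline

classifier = pipeline("text-classification", model="SaiSaketh/my_awesome_qa_model")
print(classifier("This is a sample input to classify."))
```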
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "my_awesome_qa_model", "results": []}]}
|
SaiSaketh/my_awesome_qa_model
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T01:58:12+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
my\_awesome\_qa\_model
======================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1231
* Accuracy: 0.69
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [gotchachurchkhela/SN6-23](https://huggingface.co/gotchachurchkhela/SN6-23)
* [tom-brady/sn6_200](https://huggingface.co/tom-brady/sn6_200)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: gotchachurchkhela/SN6-23
        layer_range: [0, 24]
      - model: tom-brady/sn6_200
        layer_range: [0, 24]
merge_method: slerp
base_model: gotchachurchkhela/SN6-23
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
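Once the merge has been produced, the result behaves like any other causal language model. The following sketch is added for illustration (it is not part of the mergekit output) and assumes the merged weights were uploaded to `Sumail/Ame1`, the repo id given in this card's metadata, with `accelerate` installed for `device_map="auto"`.
```python
# Sketch: load and sample from the merged StableLM-style model.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Sumail/Ame1"  # assumed upload target for the merge described above
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```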
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["gotchachurchkhela/SN6-23", "tom-brady/sn6_200"]}
|
Sumail/Ame1
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:gotchachurchkhela/SN6-23",
"base_model:tom-brady/sn6_200",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T01:58:53+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #mergekit #merge #conversational #base_model-gotchachurchkhela/SN6-23 #base_model-tom-brady/sn6_200 #autotrain_compatible #endpoints_compatible #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* gotchachurchkhela/SN6-23
* tom-brady/sn6_200
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* gotchachurchkhela/SN6-23\n* tom-brady/sn6_200",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #mergekit #merge #conversational #base_model-gotchachurchkhela/SN6-23 #base_model-tom-brady/sn6_200 #autotrain_compatible #endpoints_compatible #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* gotchachurchkhela/SN6-23\n* tom-brady/sn6_200",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
zzttbrdd/sn6_6m
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T01:59:07+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# lust 7b
yeah yeah, you get the drill: it's just the gargamels. proper quantizations coming sometime soon
|
{"license": "apache-2.0"}
|
Fizzarolli/lust-7b-GGUF
| null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T02:06:08+00:00
|
[] |
[] |
TAGS
#gguf #license-apache-2.0 #region-us
|
# lust 7b
yeah yeah you get the drill its just the gargamels. proper quantizations coming sometime soon
|
[
"# lust 7b\nyeah yeah you get the drill its just the gargamels. proper quantizations coming sometime soon"
] |
[
"TAGS\n#gguf #license-apache-2.0 #region-us \n",
"# lust 7b\nyeah yeah you get the drill its just the gargamels. proper quantizations coming sometime soon"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
yongsun-shim/eeve-8bit-test
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null |
2024-04-15T02:06:36+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [gotchachurchkhela/SN6-23](https://huggingface.co/gotchachurchkhela/SN6-23)
* [GamblerOnTrain/danke20a](https://huggingface.co/GamblerOnTrain/danke20a)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: gotchachurchkhela/SN6-23
        layer_range: [0, 24]
      - model: GamblerOnTrain/danke20a
        layer_range: [0, 24]
merge_method: slerp
base_model: gotchachurchkhela/SN6-23
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["gotchachurchkhela/SN6-23", "GamblerOnTrain/danke20a"]}
|
Sumail/Ame2
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:gotchachurchkhela/SN6-23",
"base_model:GamblerOnTrain/danke20a",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T02:07:47+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #mergekit #merge #conversational #base_model-gotchachurchkhela/SN6-23 #base_model-GamblerOnTrain/danke20a #autotrain_compatible #endpoints_compatible #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* gotchachurchkhela/SN6-23
* GamblerOnTrain/danke20a
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* gotchachurchkhela/SN6-23\n* GamblerOnTrain/danke20a",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #mergekit #merge #conversational #base_model-gotchachurchkhela/SN6-23 #base_model-GamblerOnTrain/danke20a #autotrain_compatible #endpoints_compatible #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* gotchachurchkhela/SN6-23\n* GamblerOnTrain/danke20a",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.005-filtered
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
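Because this repository stores a PEFT adapter rather than full weights, it is loaded on top of the base model. The sketch below is illustrative only: the task head is not documented in this card, so the use of `AutoModelForQuestionAnswering` (SberQuAD is an extractive-QA dataset) is an assumption, as is the adapter repo id `Shalazary/ruBert-base-sberquad-0.005-filtered`.
```python
# Sketch: attach the PEFT adapter to the ai-forever/ruBert-base backbone.
from peft import PeftModel
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

base_id = "ai-forever/ruBert-base"
adapter_id = "Shalazary/ruBert-base-sberquad-0.005-filtered"  # repo id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForQuestionAnswering.from_pretrained(base_id)  # QA head is an assumption
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```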
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.005-filtered", "results": []}]}
|
Shalazary/ruBert-base-sberquad-0.005-filtered
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T02:11:32+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.005-filtered
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# ruBert-base-sberquad-0.005-filtered\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.005-filtered\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
|
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
|
shaswatamitra/llama2-7b-chat-hf-finetuned2
| null |
[
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T02:12:53+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
|
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit AutoTrain.
# Usage
|
[
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] |
[
"TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n",
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0603
- Precision: 0.9332
- Recall: 0.9517
- F1: 0.9423
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0747 | 1.0 | 1756 | 0.0679 | 0.8990 | 0.9307 | 0.9146 | 0.9807 |
| 0.0346 | 2.0 | 3512 | 0.0641 | 0.9331 | 0.9478 | 0.9404 | 0.9857 |
| 0.0233 | 3.0 | 5268 | 0.0603 | 0.9332 | 0.9517 | 0.9423 | 0.9864 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
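## Inference example
The card above documents training only. Below is a minimal inference sketch; the example sentence and the `aggregation_strategy` choice are illustrative assumptions, not part of the original card.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub as a token-classification pipeline.
ner = pipeline(
    "token-classification",
    model="BrandonM001/bert-finetuned-ner2",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entities
)

print(ner("Hugging Face is based in New York City."))
```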
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-finetuned-ner2", "results": []}]}
|
BrandonM001/bert-finetuned-ner2
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T02:13:24+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-finetuned-ner2
===================
This model is a fine-tuned version of bert-base-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0603
* Precision: 0.9332
* Recall: 0.9517
* F1: 0.9423
* Accuracy: 0.9864
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-to-image
|
diffusers
|
# Photography LoRA (XL)
<Gallery />




## Model description
PhotographyLoRA is a LoRA trained on the Stable Diffusion XL base checkpoint.
Base model: SDXL 1.0
Training
- Steps: 1,445
- Epochs: 10
Usage tips
- CLIP skip: 1
[Civit](https://civitai.com/models/366187/flowers-photography)
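## Usage (diffusers)
A minimal diffusers sketch for applying this LoRA on top of the SDXL base checkpoint; the prompt is illustrative, and it is assumed that `load_lora_weights` can resolve the weight file directly from this repository.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model, then attach the LoRA weights from this repository.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("f0ster/PhotographyLoRA")

image = pipe("macro photograph of a flower, natural light, shallow depth of field").images[0]
image.save("flower.png")
```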
|
{"language": ["en"], "license": "openrail++", "library_name": "diffusers", "tags": ["stable-diffusion", "lora", "sdxl"], "pipeline_tag": "text-to-image", "base_model": "stabilityai/stable-diffusion-xl-base-1.0"}
|
f0ster/PhotographyLoRA
| null |
[
"diffusers",
"stable-diffusion",
"lora",
"sdxl",
"text-to-image",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us",
"has_space"
] | null |
2024-04-15T02:15:41+00:00
|
[] |
[
"en"
] |
TAGS
#diffusers #stable-diffusion #lora #sdxl #text-to-image #en #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us #has_space
|
# Photography LoRA (XL)
<Gallery />
!image/png
!image/png
!image/png
!image/png
## Model description
PhotographyLoRa is trained with the Stable-Diffusion-xl base checkpoint
Base Model: SDXL 1.0
Training
STEPS: 1,445
EPOCHS: 10
Usage Tips
CLIP SKIP: 1
Civit
|
[
"# Photography LoRA (XL)\n\n<Gallery />\n\n!image/png\n\n!image/png\n\n!image/png\n\n!image/png",
"## Model description\n\nPhotographyLoRa is trained with the Stable-Diffusion-xl base checkpoint\n\nBase Model: SDXL 1.0\n\nTraining\nSTEPS: 1,445\nEPOCHS: 10\n\nUsage Tips\n\nCLIP SKIP: 1\n\nCivit"
] |
[
"TAGS\n#diffusers #stable-diffusion #lora #sdxl #text-to-image #en #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us #has_space \n",
"# Photography LoRA (XL)\n\n<Gallery />\n\n!image/png\n\n!image/png\n\n!image/png\n\n!image/png",
"## Model description\n\nPhotographyLoRa is trained with the Stable-Diffusion-xl base checkpoint\n\nBase Model: SDXL 1.0\n\nTraining\nSTEPS: 1,445\nEPOCHS: 10\n\nUsage Tips\n\nCLIP SKIP: 1\n\nCivit"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
ai-er/llama-2-medi-dialog-mini-finetuned
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-15T02:17:00+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# load_from_hub is the small helper from the course notebook (hf_hub_download + pickle under the hood).
model = load_from_hub(repo_id="anologicon/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
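A greedy rollout with the loaded model could then look like this; the `"qtable"` key name follows the course notebook's model dictionary and is an assumption here.
```python
import numpy as np

# Assumes the gymnasium 5-tuple step API.
state, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = int(np.argmax(model["qtable"][state]))  # pick the greedy action for this state
    state, reward, terminated, truncated, info = env.step(action)
```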
|
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
|
anologicon/q-FrozenLake-v1-4x4-noSlippery
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-15T02:20:08+00:00
|
[] |
[] |
TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
|
[
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
[
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
text-generation
|
transformers
|
# karasu-1.1B-linear2
karasu-1.1B-merge1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [lightblue/karasu-1.1B](https://huggingface.co/lightblue/karasu-1.1B)
* [niryuu/Karasu-1.1b-chat-vector](https://huggingface.co/niryuu/Karasu-1.1b-chat-vector)
## 🧩 Configuration
```yaml
models:
- model: lightblue/karasu-1.1B
layer_range: [0, 22]
parameters:
weight: 0.1
- model: niryuu/Karasu-1.1b-chat-vector
layer_range: [0, 22]
parameters:
weight: 0.9
merge_method: linear
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aipib/karasu-1.1B-merge1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"tags": ["merge", "mergekit", "lazymergekit", "lightblue/karasu-1.1B", "niryuu/Karasu-1.1b-chat-vector"], "base_model": ["lightblue/karasu-1.1B", "niryuu/Karasu-1.1b-chat-vector"]}
|
aipib/karasu-1.1B-linear2
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"lightblue/karasu-1.1B",
"niryuu/Karasu-1.1b-chat-vector",
"conversational",
"base_model:lightblue/karasu-1.1B",
"base_model:niryuu/Karasu-1.1b-chat-vector",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T02:20:13+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #lightblue/karasu-1.1B #niryuu/Karasu-1.1b-chat-vector #conversational #base_model-lightblue/karasu-1.1B #base_model-niryuu/Karasu-1.1b-chat-vector #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# karasu-1.1B-linear2
karasu-1.1B-merge1 is a merge of the following models using LazyMergekit:
* lightblue/karasu-1.1B
* niryuu/Karasu-1.1b-chat-vector
## Configuration
## Usage
|
[
"# karasu-1.1B-linear2\n\nkarasu-1.1B-merge1 is a merge of the following models using LazyMergekit:\n* lightblue/karasu-1.1B\n* niryuu/Karasu-1.1b-chat-vector",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #lightblue/karasu-1.1B #niryuu/Karasu-1.1b-chat-vector #conversational #base_model-lightblue/karasu-1.1B #base_model-niryuu/Karasu-1.1b-chat-vector #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# karasu-1.1B-linear2\n\nkarasu-1.1B-merge1 is a merge of the following models using LazyMergekit:\n* lightblue/karasu-1.1B\n* niryuu/Karasu-1.1b-chat-vector",
"## Configuration",
"## Usage"
] |
text-generation
|
transformers
|
"""this is my second attempt at converting a model float16 quantized model to 1.5bit. i used my model liminerity/M7-7b for the base model and
trained on: abideen/cosmopedia-100k-pretain dataset and used his google colab project to make this"""
#EXAMPLE INFERENCE CODE FROM ABIDEEN'S COLAB PROJECT
```
import torch
from torch import nn
import torch.nn.functional as F

from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.models.llama.modeling_llama import *

# Load a pretrained BitNet model
model = "liminerity/Bitnet-M7-70M"
tokenizer = AutoTokenizer.from_pretrained(model)
model = AutoModelForCausalLM.from_pretrained(model)
def activation_quant(x):
scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp_(min=1e-5)
y = (x * scale).round().clamp_(-128, 127)
y = y / scale
return y
def weight_quant(w):
scale = 1.0 / w.abs().mean().clamp_(min=1e-5)
u = (w * scale).round().clamp_(-1, 1)
u = u / scale
return u
class BitLinear(nn.Linear):
def forward(self, x):
w = self.weight # a weight tensor with shape [d, k]
x = x.to(w.device)
RMSNorm = LlamaRMSNorm(x.shape[-1]).to(w.device)
x_norm = RMSNorm(x)
# A trick for implementing Straight−Through−Estimator (STE) using detach()
x_quant = x_norm + (activation_quant(x_norm) - x_norm).detach()
w_quant = w + (weight_quant(w) - w).detach()
y = F.linear(x_quant, w_quant)
return y
def convert_to_bitnet(model, copy_weights):
for name, module in model.named_modules():
# Replace linear layers with BitNet
if isinstance(module, LlamaSdpaAttention) or isinstance(module, LlamaMLP):
for child_name, child_module in module.named_children():
if isinstance(child_module, nn.Linear):
bitlinear = BitLinear(child_module.in_features, child_module.out_features, child_module.bias is not None).to(device="cuda:0")
if copy_weights:
bitlinear.weight = child_module.weight
if child_module.bias is not None:
bitlinear.bias = child_module.bias
setattr(module, child_name, bitlinear)
# Remove redundant input_layernorms
elif isinstance(module, LlamaDecoderLayer):
for child_name, child_module in module.named_children():
if isinstance(child_module, LlamaRMSNorm) and child_name == "input_layernorm":
setattr(module, child_name, nn.Identity().to(device="cuda:0"))
convert_to_bitnet(model, copy_weights=True)
model.to(device="cuda:0")
prompt = "What is Machine Learning?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
generate_ids = model.generate(inputs.input_ids, max_length=50)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
|
{"tags": ["Mistral", "1bit", "bitnet", "abideen", "M7", "Liminerity"], "datasets": ["abideen/Cosmopedia-100k-pretrain"]}
|
liminerity/Bitnet-M7-70m
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"Mistral",
"1bit",
"bitnet",
"abideen",
"M7",
"Liminerity",
"dataset:abideen/Cosmopedia-100k-pretrain",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T02:21:00+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #Mistral #1bit #bitnet #abideen #M7 #Liminerity #dataset-abideen/Cosmopedia-100k-pretrain #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
"""this is my second attempt at converting a model float16 quantized model to 1.5bit. i used my model liminerity/M7-7b for the base model and
trained on: abideen/cosmopedia-100k-pretain dataset and used his google colab project to make this"""
#EXAMPLE INFERENCE CODE FROM ABIDEEN'S COLAB PROJECT
|
[] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #Mistral #1bit #bitnet #abideen #M7 #Liminerity #dataset-abideen/Cosmopedia-100k-pretrain #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Used dataset for fine-tuning
- sahil2801/CodeAlpaca-20k
- m-a-p/CodeFeedback-Filtered-Instruction
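# Usage
No usage snippet is included above, so here is a minimal generation sketch; the instruction/response prompt format is an assumption and may need to match the format actually used during fine-tuning.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "upstage/TinySolar-248m-4k-code-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instruction:\nWrite a Python function that reverses a string.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```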
|
{"license": "apache-2.0"}
|
upstage/TinySolar-248m-4k-code-instruct
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T02:21:14+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Used dataset for fine-tuning
- sahil2801/CodeAlpaca-20k
- m-a-p/CodeFeedback-Filtered-Instruction
|
[
"# Used dataset for fine-tuning\n- sahil2801/CodeAlpaca-20k\n- m-a-p/CodeFeedback-Filtered-Instruction"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Used dataset for fine-tuning\n- sahil2801/CodeAlpaca-20k\n- m-a-p/CodeFeedback-Filtered-Instruction"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
tom-brady/sn6_247
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T02:21:17+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
<div style="width: 100%;">
<img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;">
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>
This is a 4-bit EXL2 version of the [tigerbot-70b-chat-v6](https://huggingface.co/TigerResearch/tigerbot-70b-chat-v6).
It was quantized to 4-bit using https://github.com/turboderp/exllamav2.
## How to download and use this model from GitHub: https://github.com/TigerResearch/TigerBot
Here are the commands to clone TigerBot and install its dependencies.
```
conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt
```
Inference with the command-line interface, using exllamav2:
```
# install exllamav2
git clone https://github.com/turboderp/exllamav2
cd exllamav2
pip install -r requirements.txt
# infer command
CUDA_VISIBLE_DEVICES=0 python other_infer/exllamav2_hf_infer.py --model_path TigerResearch/tigerbot-70b-chat-v6-4bit-exl2
```
|
{"license": "apache-2.0"}
|
TigerResearch/tigerbot-70b-chat-v6-4bit-exl2
| null |
[
"transformers",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T02:21:52+00:00
|
[] |
[] |
TAGS
#transformers #llama #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<div style="width: 100%;">
<img src="URL alt="TigerBot" style="width: 20%; display: block; margin: auto;">
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
<a href="URL target="_blank">TigerBot</a> • <a href="URL target="_blank">Hugging Face</a>
</p>
This is a 4-bit EXL2 version of the tigerbot-70b-chat-v6.
It was quantized to 4bit using: URL
## How to download and use this model in github: URL
Here are commands to clone the TigerBot and install.
Inference with command line interface
infer with exllamav2
|
[
"## How to download and use this model in github: URL\n\nHere are commands to clone the TigerBot and install.\n\n\n\nInference with command line interface\n\ninfer with exllamav2"
] |
[
"TAGS\n#transformers #llama #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## How to download and use this model in github: URL\n\nHere are commands to clone the TigerBot and install.\n\n\n\nInference with command line interface\n\ninfer with exllamav2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DS-6.7B-schema_2
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0235 | 0.19 | 50 | 0.2107 |
| 0.0528 | 0.38 | 100 | 0.1890 |
| 0.055 | 0.57 | 150 | 0.1867 |
| 0.053 | 0.76 | 200 | 0.1722 |
| 0.0843 | 0.95 | 250 | 0.1718 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
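## Inference example
The card reports training details only. Below is a minimal sketch for loading the PEFT adapter on top of its base model; the prompt is illustrative.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-6.7b-instruct"
adapter_id = "jdeklerk10/DS-6.7B-schema_2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter weights

inputs = tokenizer("-- Write a SQL schema for a simple blog\n", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```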
|
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "deepseek-ai/deepseek-coder-6.7b-instruct", "model-index": [{"name": "DS-6.7B-schema_2", "results": []}]}
|
jdeklerk10/DS-6.7B-schema_2
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"region:us"
] | null |
2024-04-15T02:22:29+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-deepseek-ai/deepseek-coder-6.7b-instruct #license-other #region-us
|
DS-6.7B-schema\_2
=================
This model is a fine-tuned version of deepseek-ai/deepseek-coder-6.7b-instruct on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1718
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.01
* num\_epochs: 1
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-deepseek-ai/deepseek-coder-6.7b-instruct #license-other #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.01\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
yongsun-shim/eeve-4bit-test
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-15T02:25:48+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
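Until the TODO above is filled in, here is a minimal sketch of what loading and rolling out this checkpoint could look like; the checkpoint file name is an assumption and should be checked against the repository's files.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo (file name is an assumption).
checkpoint = load_from_hub(repo_id="ahforoughi/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```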
|
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "286.13 +/- 16.13", "name": "mean_reward", "verified": false}]}]}]}
|
ahforoughi/PPO-LunarLander-v2
| null |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-15T02:28:38+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
|
[
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
[
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation
|
transformers
|
# Vezora/Mistral-22B-v0.1 AWQ
- Model creator: [Vezora](https://huggingface.co/Vezora)
- Original model: [Mistral-22B-v0.2](https://huggingface.co/Vezora/Mistral-22B-v0.2)
## Model Summary
- Just two days after our release of **Mistral-22b-v0.1**, we are excited to introduce our handcrafted experimental model, **Mistral-22b-v0.2**. This model is a culmination of equal knowledge distilled from all experts into a single, dense 22b model. It is not a single trained expert; rather, it is a compressed MoE model turned into a dense 22b model. This is the first working MoE-to-dense model conversion.
- v0.2 has trained on 8x more data than v0.1!
## How to use
**GUANACO PROMPT FORMAT** YOU MUST USE THE GUANACO PROMPT FORMAT SHOWN BELOW. Not using this prompt format will lead to suboptimal results.
- This model requires a specific chat template; as the training format was Guanaco, this is what it looks like:
- "### System: You are a helpful assistant. ### Human###: Give me the best chili recipe you can ###Assistant: Here is the best chili recipe..."
|
{"language": ["en"], "license": "apache-2.0", "tags": ["quantized", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "text-generation-inference"], "model_name": "Mistral-22B-v0.2", "base_model": "mistral-community/Mixtral-8x22B-v0.1", "model_creator": "Vezora", "model_type": "mistral", "pipeline_tag": "text-generation", "inference": false}
|
solidrust/Mistral-22B-v0.2-AWQ
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"quantized",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"en",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T02:28:46+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #quantized #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #en #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #region-us
|
# Vezora/Mistral-22B-v0.1 AWQ
- Model creator: Vezora
- Original model: Mistral-22B-v0.2
## Model Summary
- Just two days after our release of Mistral-22b-v0.1, we are excited to introduce our handcrafted experimental model, Mistral-22b-v0.2. This model is a culmination of equal knowledge distilled from all experts into a single, dense 22b model. It is not a single trained expert; rather, it is a compressed MoE model turned into a dense 22b model. This is the first working MoE-to-dense model conversion.
- v0.2 was trained on 8x more data than v0.1!
## How to use
GUANACO PROMPT FORMAT YOU MUST USE THE GUANACO PROMPT FORMAT SHOWN BELOW. Not using this prompt format will lead to suboptimal results.
- This model requires a specific chat template; as the training format was Guanaco, this is what it looks like:
- "### System: You are a helpful assistant. ### Human###: Give me the best chili recipe you can ###Assistant: Here is the best chili recipe..."
|
[
"# Vezora/Mistral-22B-v0.1 AWQ\n\n- Model creator: Vezora\n- Original model: Mistral-22B-v0.2",
"## Model Summary\n\n- Just two days after our release of Mistral-22b-v0.1, we are excited to introduce our handcrafted experimental model, Mistral-22b-v.02. This model is a culmination of equal knowledge distilled from all experts into a single, dense 22b model. This model is not a single trained expert, rather its a compressed MOE model, turning it into a dense 22b mode. This is the first working MOE to Dense model conversion.\n- v0.2 has trained on 8x more data than v0.1!",
"## How to use\n\nGUANACO PROMPT FORMAT YOU MUST USE THE GUANACO PROMPT FORMAT SHOWN BELOW. Not using this prompt format will lead to sub optimal results.\n\n- This model requires a specific chat template, as the training format was Guanaco this is what it looks like:\n- \"### System: You are a helpful assistant. ### Human###: Give me the best chili recipe you can ###Assistant: Here is the best chili recipe...\""
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #quantized #4-bit #AWQ #autotrain_compatible #endpoints_compatible #text-generation-inference #en #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #region-us \n",
"# Vezora/Mistral-22B-v0.1 AWQ\n\n- Model creator: Vezora\n- Original model: Mistral-22B-v0.2",
"## Model Summary\n\n- Just two days after our release of Mistral-22b-v0.1, we are excited to introduce our handcrafted experimental model, Mistral-22b-v.02. This model is a culmination of equal knowledge distilled from all experts into a single, dense 22b model. This model is not a single trained expert, rather its a compressed MOE model, turning it into a dense 22b mode. This is the first working MOE to Dense model conversion.\n- v0.2 has trained on 8x more data than v0.1!",
"## How to use\n\nGUANACO PROMPT FORMAT YOU MUST USE THE GUANACO PROMPT FORMAT SHOWN BELOW. Not using this prompt format will lead to sub optimal results.\n\n- This model requires a specific chat template, as the training format was Guanaco this is what it looks like:\n- \"### System: You are a helpful assistant. ### Human###: Give me the best chili recipe you can ###Assistant: Here is the best chili recipe...\""
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0253
- Accuracy: 0.9973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 214 | 0.0540 | 0.9905 |
| No log | 2.0 | 428 | 0.0606 | 0.9932 |
| 0.0648 | 3.0 | 642 | 0.0253 | 0.9973 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.1
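A minimal inference sketch (not part of the auto-generated card; the task labels are undocumented, so the example input and label reading below are assumptions):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Extrabass/test_trainer")
# Example Chinese input; the meaning of LABEL_0 / LABEL_1 is not documented in the card.
print(classifier("这部电影非常好看。"))
```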
|
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google-bert/bert-base-chinese", "model-index": [{"name": "test_trainer", "results": []}]}
|
Extrabass/test_trainer
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-chinese",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T02:29:56+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-chinese #autotrain_compatible #endpoints_compatible #region-us
|
test\_trainer
=============
This model is a fine-tuned version of google-bert/bert-base-chinese on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0253
* Accuracy: 0.9973
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.37.2
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-google-bert/bert-base-chinese #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/ibivibiv/collosus_120b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/collosus_120b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
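As an illustration (not from the original card), the two-part quants can be fetched and concatenated in plain Python before loading, since the parts are simple byte-wise splits; the example below uses the i1-Q4_K_S files listed in the Provided Quants table and assumes `huggingface_hub` and `llama-cpp-python` are installed (and enough RAM for a ~67 GB model):
```python
import shutil
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo = "mradermacher/collosus_120b-i1-GGUF"
parts = [
    "collosus_120b.i1-Q4_K_S.gguf.part1of2",
    "collosus_120b.i1-Q4_K_S.gguf.part2of2",
]

# Download both parts and concatenate them into a single GGUF file.
with open("collosus_120b.i1-Q4_K_S.gguf", "wb") as out:
    for name in parts:
        path = hf_hub_download(repo_id=repo, filename=name)
        with open(path, "rb") as part:
            shutil.copyfileobj(part, out)

llm = Llama(model_path="collosus_120b.i1-Q4_K_S.gguf", n_ctx=4096)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```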
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ1_S.gguf) | i1-IQ1_S | 24.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ1_M.gguf) | i1-IQ1_M | 27.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 31.2 | |
| [GGUF](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 34.7 | |
| [GGUF](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ2_S.gguf) | i1-IQ2_S | 36.5 | |
| [GGUF](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ2_M.gguf) | i1-IQ2_M | 39.7 | |
| [GGUF](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q2_K.gguf) | i1-Q2_K | 43.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 45.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 48.2 | |
| [PART 1](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 50.8 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 51.0 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 52.7 | |
| [PART 1](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 56.7 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 61.8 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 62.9 | |
| [PART 1](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 66.7 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 66.9 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 70.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 81.1 | |
| [PART 1](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 83.3 | |
| [PART 1](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/collosus_120b-i1-GGUF/resolve/main/collosus_120b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 96.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "ibivibiv/collosus_120b", "quantized_by": "mradermacher"}
|
mradermacher/collosus_120b-i1-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:ibivibiv/collosus_120b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T02:30:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #base_model-ibivibiv/collosus_120b #license-apache-2.0 #endpoints_compatible #region-us
|
About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #base_model-ibivibiv/collosus_120b #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null | null |
LoRA adapter files for https://huggingface.co/tdrussell/Mixtral-8x22B-Capyboros-v1
|
{"license": "apache-2.0"}
|
tdrussell/Mixtral-8x22B-Capyboros-v1-lora
| null |
[
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T02:30:26+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
LoRA adapter files for URL
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
null | null |
q4_k_s quant for https://huggingface.co/tdrussell/Mixtral-8x22B-Capyboros-v1. These files are split using the gguf-split tool. If you want to recombine them into a single file, you MUST use that tool, NOT cat.
|
{"license": "apache-2.0"}
|
tdrussell/Mixtral-8x22B-Capyboros-v1-GGUF-q4_k_s
| null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T02:32:22+00:00
|
[] |
[] |
TAGS
#gguf #license-apache-2.0 #region-us
|
q4_k_s quant for URL. These files are split using the gguf-split tool. If you want to recombine them into a single file, you MUST use that tool, NOT cat.
|
[] |
[
"TAGS\n#gguf #license-apache-2.0 #region-us \n"
] |
null | null |
<div style="width: 100%;">
<img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;">
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>
This is a 4-bit EXL2 version of the [tigerbot-13b-chat-v6](https://huggingface.co/TigerResearch/tigerbot-13b-chat-v6).
It was quantized to 4bit using: https://github.com/turboderp/exllamav2
## How to download and use this model in github: https://github.com/TigerResearch/TigerBot
Here are the commands to clone TigerBot and install its dependencies.
```
conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt
```
Inference with command line interface
infer with exllamav2
```
# install exllamav2
git clone https://github.com/turboderp/exllamav2
cd exllamav2
pip install -r requirements.txt
# infer command
CUDA_VISIBLE_DEVICES=0 python other_infer/exllamav2_hf_infer.py --model_path TigerResearch/tigerbot-13b-chat-v6-4bit-exl2
```
|
{"license": "apache-2.0"}
|
TigerResearch/tigerbot-13b-chat-v6-4bit-exl2
| null |
[
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T02:33:01+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
<div style="width: 100%;">
<img src="URL alt="TigerBot" style="width: 20%; display: block; margin: auto;">
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
<a href="URL target="_blank">TigerBot</a> • <a href="URL target="_blank">Hugging Face</a>
</p>
This is a 4-bit EXL2 version of the tigerbot-13b-chat-v6.
It was quantized to 4bit using: URL
## How to download and use this model in github: URL
Here are the commands to clone TigerBot and install its dependencies.
Inference with command line interface
infer with exllamav2
|
[
"## How to download and use this model in github: URL\n\nHere are commands to clone the TigerBot and install.\n\n\n\nInference with command line interface\n\ninfer with exllamav2"
] |
[
"TAGS\n#license-apache-2.0 #region-us \n",
"## How to download and use this model in github: URL\n\nHere are commands to clone the TigerBot and install.\n\n\n\nInference with command line interface\n\ninfer with exllamav2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
cilantro9246/m3bryby
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T02:33:22+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# This model predicts Suicide vs. Non-Suicide: Label-1 is Suicide and Label-0 is Non-Suicide.
# Transformers_Project
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1389
- Accuracy: 0.9672
- F1: 0.9672
- Precision: 0.9676
- Recall: 0.9667
- Zero One Loss: 0.0328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Zero One Loss |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|
| 0.2495 | 1.0 | 875 | 0.1397 | 0.9552 | 0.9563 | 0.9320 | 0.982 | 0.0448 |
| 0.0865 | 2.0 | 1750 | 0.1163 | 0.9692 | 0.9692 | 0.9696 | 0.9687 | 0.0308 |
| 0.0344 | 3.0 | 2625 | 0.1389 | 0.9672 | 0.9672 | 0.9676 | 0.9667 | 0.0328 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
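A minimal inference sketch (not part of the auto-generated card); per the heading above, Label-1 corresponds to Suicide and Label-0 to Non-Suicide, though the pipeline may report the generic `LABEL_0`/`LABEL_1` names:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="MuradA/Transformers_Project")
result = classifier("I feel completely hopeless and see no way out.")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': ...}] -> Label-1 = Suicide
```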
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "distilbert-base-cased", "model-index": [{"name": "Transformers_Project", "results": []}]}
|
MuradA/Transformers_Project
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T02:37:42+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
This model predicts Suicide vs. Non-Suicide: Label-1 is Suicide and Label-0 is Non-Suicide.
==============================================================================================
Transformers\_Project
=====================
This model is a fine-tuned version of distilbert-base-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1389
* Accuracy: 0.9672
* F1: 0.9672
* Precision: 0.9676
* Recall: 0.9667
* Zero One Loss: 0.0328
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-to-speech
|
tensorflowtts
|
# LightSpeech MFA SW v1
LightSpeech MFA SW v1 is a text-to-mel-spectrogram model based on the [LightSpeech](https://arxiv.org/abs/2102.04040) architecture. This model was trained from scratch on a real audio dataset. The list of real speakers includes:
- sw-KE-OpenBible
We trained an acoustic Swahili model on our speech corpus using [Montreal Forced Aligner v2.0.0](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) and used it as the duration extractor. That model, and consequently our model, uses the IPA phone set for Swahili. We used [gruut](https://github.com/rhasspy/gruut) for phonemization purposes. We followed these [steps](https://github.com/TensorSpeech/TensorFlowTTS/tree/master/examples/mfa_extraction) to perform duration extraction.
This model was trained using the [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS) framework. All training was done on a Scaleway RENDER-S VM with a Tesla P100 GPU. All necessary scripts used for training can be found in this [Github Fork](https://github.com/bookbot-hive/TensorFlowTTS), as well as the [Training metrics](https://huggingface.co/bookbot/lightspeech-mfa-sw-v1/tensorboard) logged via Tensorboard.
## Model
| Model | Config | SR (Hz) | Mel range (Hz) | FFT / Hop / Win (pt) | #steps |
| ----------------------- | --------------------------------------------------------------------------------- | ------- | -------------- | -------------------- | ------ |
| `lightspeech-mfa-sw-v1` | [Link](https://huggingface.co/bookbot/lightspeech-mfa-sw-v1/blob/main/config.yml) | 44.1K | 20-11025 | 2048 / 512 / None | 200K |
## Training Procedure
<details>
<summary>Feature Extraction Setting</summary>
hop_size: 512 # Hop size.
format: "npy"
</details>
<details>
<summary>Network Architecture Setting</summary>
model_type: lightspeech
lightspeech_params:
dataset: "swahiliipa"
n_speakers: 1
encoder_hidden_size: 256
encoder_num_hidden_layers: 3
encoder_num_attention_heads: 2
encoder_attention_head_size: 16
encoder_intermediate_size: 1024
encoder_intermediate_kernel_size:
- 5
- 25
- 13
- 9
encoder_hidden_act: "mish"
decoder_hidden_size: 256
decoder_num_hidden_layers: 3
decoder_num_attention_heads: 2
decoder_attention_head_size: 16
decoder_intermediate_size: 1024
decoder_intermediate_kernel_size:
- 17
- 21
- 9
- 13
decoder_hidden_act: "mish"
variant_prediction_num_conv_layers: 2
variant_predictor_filter: 256
variant_predictor_kernel_size: 3
variant_predictor_dropout_rate: 0.5
num_mels: 80
hidden_dropout_prob: 0.2
attention_probs_dropout_prob: 0.1
max_position_embeddings: 2048
initializer_range: 0.02
output_attentions: False
output_hidden_states: False
</details>
<details>
<summary>Data Loader Setting</summary>
batch_size: 8 # Batch size for each GPU with assuming that gradient_accumulation_steps == 1.
eval_batch_size: 16
remove_short_samples: true # Whether to remove samples the length of which are less than batch_max_steps.
allow_cache: true # Whether to allow cache in dataset. If true, it requires cpu memory.
mel_length_threshold: 32 # remove all targets has mel_length <= 32
is_shuffle: true # shuffle dataset after each epoch.
</details>
<details>
<summary>Optimizer & Scheduler Setting</summary>
optimizer_params:
initial_learning_rate: 0.0001
end_learning_rate: 0.00005
decay_steps: 150000 # < train_max_steps is recommend.
warmup_proportion: 0.02
weight_decay: 0.001
gradient_accumulation_steps: 2
var_train_expr:
null # trainable variable expr (eg. 'embeddings|encoder|decoder' )
# must separate by |. if var_train_expr is null then we
# training all variable
</details>
<details>
<summary>Interval Setting</summary>
train_max_steps: 200000 # Number of training steps.
save_interval_steps: 5000 # Interval steps to save checkpoint.
eval_interval_steps: 5000 # Interval steps to evaluate the network.
log_interval_steps: 200 # Interval steps to record the training log.
delay_f0_energy_steps: 3 # 2 steps use LR outputs only then 1 steps LR + F0 + Energy.
</details>
<details>
<summary>Other Setting</summary>
num_save_intermediate_results: 1 # Number of batch to be saved as intermediate results.
</details>
## How to Use
```py
import tensorflow as tf
from tensorflow_tts.inference import TFAutoModel, AutoProcessor
lightspeech = TFAutoModel.from_pretrained("bookbot/lightspeech-mfa-sw-v1")
processor = AutoProcessor.from_pretrained("bookbot/lightspeech-mfa-sw-v1")
text, speaker_name = "Hello World", "sw-KE-OpenBible"
input_ids = processor.text_to_sequence(text)
mel, duration_outputs, _ = lightspeech.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
speaker_ids=tf.convert_to_tensor(
[processor.speakers_map[speaker_name]], dtype=tf.int32
),
speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
f0_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
energy_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
)
```
## Disclaimer
Do consider the biases which came from pre-training datasets that may be carried over into the results of this model.
## Authors
LightSpeech MFA SW v1 was trained and evaluated by [David Samuel Setiawan](https://davidsamuell.github.io/), [Wilson Wongso](https://wilsonwongso.dev/). All computation and development are done on Scaleway.
## Framework versions
- TensorFlowTTS 1.8
- TensorFlow 2.7.0
|
{"language": "sw", "license": "cc-by-sa-4.0", "tags": ["tensorflowtts", "audio", "text-to-speech", "text-to-mel"], "datasets": ["bookbot/OpenBible_Swahili"], "inference": false}
|
bookbot/lightspeech-mfa-sw-v1
| null |
[
"tensorflowtts",
"tflite",
"tensorboard",
"onnx",
"audio",
"text-to-speech",
"text-to-mel",
"sw",
"dataset:bookbot/OpenBible_Swahili",
"arxiv:2102.04040",
"license:cc-by-sa-4.0",
"region:us"
] | null |
2024-04-15T02:38:09+00:00
|
[
"2102.04040"
] |
[
"sw"
] |
TAGS
#tensorflowtts #tflite #tensorboard #onnx #audio #text-to-speech #text-to-mel #sw #dataset-bookbot/OpenBible_Swahili #arxiv-2102.04040 #license-cc-by-sa-4.0 #region-us
|
LightSpeech MFA SW v1
=====================
LightSpeech MFA SW v1 is a text-to-mel-spectrogram model based on the LightSpeech architecture. This model was trained from scratch on a real audio dataset. The list of real speakers includes:
* sw-KE-OpenBible
We trained an acoustic Swahili model on our speech corpus using Montreal Forced Aligner v2.0.0 and used it as the duration extractor. That model, and consequently our model, uses the IPA phone set for Swahili. We used gruut for phonemization purposes. We followed these steps to perform duration extraction.
This model was trained using the TensorFlowTTS framework. All training was done on a Scaleway RENDER-S VM with a Tesla P100 GPU. All necessary scripts used for training can be found in this Github Fork, as well as the Training metrics logged via Tensorboard.
Model
-----
Training Procedure
------------------
Feature Extraction Setting
```
hop_size: 512 # Hop size.
format: "npy"
```
Network Architecture Setting
```
model_type: lightspeech
lightspeech_params:
dataset: "swahiliipa"
n_speakers: 1
encoder_hidden_size: 256
encoder_num_hidden_layers: 3
encoder_num_attention_heads: 2
encoder_attention_head_size: 16
encoder_intermediate_size: 1024
encoder_intermediate_kernel_size:
- 5
- 25
- 13
- 9
encoder_hidden_act: "mish"
decoder_hidden_size: 256
decoder_num_hidden_layers: 3
decoder_num_attention_heads: 2
decoder_attention_head_size: 16
decoder_intermediate_size: 1024
decoder_intermediate_kernel_size:
- 17
- 21
- 9
- 13
decoder_hidden_act: "mish"
variant_prediction_num_conv_layers: 2
variant_predictor_filter: 256
variant_predictor_kernel_size: 3
variant_predictor_dropout_rate: 0.5
num_mels: 80
hidden_dropout_prob: 0.2
attention_probs_dropout_prob: 0.1
max_position_embeddings: 2048
initializer_range: 0.02
output_attentions: False
output_hidden_states: False
```
Data Loader Setting
```
batch_size: 8 # Batch size for each GPU with assuming that gradient_accumulation_steps == 1.
eval_batch_size: 16
remove_short_samples: true # Whether to remove samples the length of which are less than batch_max_steps.
allow_cache: true # Whether to allow cache in dataset. If true, it requires cpu memory.
mel_length_threshold: 32 # remove all targets has mel_length <= 32
is_shuffle: true # shuffle dataset after each epoch.
```
Optimizer & Scheduler Setting
```
optimizer_params:
initial_learning_rate: 0.0001
end_learning_rate: 0.00005
decay_steps: 150000 # < train_max_steps is recommend.
warmup_proportion: 0.02
weight_decay: 0.001
gradient_accumulation_steps: 2
var_train_expr:
null # trainable variable expr (eg. 'embeddings|encoder|decoder' )
# must separate by |. if var_train_expr is null then we
# training all variable
```
Interval Setting
```
train_max_steps: 200000 # Number of training steps.
save_interval_steps: 5000 # Interval steps to save checkpoint.
eval_interval_steps: 5000 # Interval steps to evaluate the network.
log_interval_steps: 200 # Interval steps to record the training log.
delay_f0_energy_steps: 3 # 2 steps use LR outputs only then 1 steps LR + F0 + Energy.
```
Other Setting
```
num_save_intermediate_results: 1 # Number of batch to be saved as intermediate results.
```
How to Use
----------
Disclaimer
----------
Do consider the biases which came from pre-training datasets that may be carried over into the results of this model.
Authors
-------
LightSpeech MFA SW v1 was trained and evaluated by David Samuel Setiawan, Wilson Wongso. All computation and development are done on Scaleway.
Framework versions
------------------
* TensorFlowTTS 1.8
* TensorFlow 2.7.0
|
[
"# Hop size.\nformat: \"npy\"\n\n```\n\n\n\nNetwork Architecture Setting\n\n```\nmodel_type: lightspeech\nlightspeech_params:\n dataset: \"swahiliipa\"\n n_speakers: 1\n encoder_hidden_size: 256\n encoder_num_hidden_layers: 3\n encoder_num_attention_heads: 2\n encoder_attention_head_size: 16\n encoder_intermediate_size: 1024\n encoder_intermediate_kernel_size:\n - 5\n - 25\n - 13\n - 9\n encoder_hidden_act: \"mish\"\n decoder_hidden_size: 256\n decoder_num_hidden_layers: 3\n decoder_num_attention_heads: 2\n decoder_attention_head_size: 16\n decoder_intermediate_size: 1024\n decoder_intermediate_kernel_size:\n - 17\n - 21\n - 9\n - 13\n decoder_hidden_act: \"mish\"\n variant_prediction_num_conv_layers: 2\n variant_predictor_filter: 256\n variant_predictor_kernel_size: 3\n variant_predictor_dropout_rate: 0.5\n num_mels: 80\n hidden_dropout_prob: 0.2\n attention_probs_dropout_prob: 0.1\n max_position_embeddings: 2048\n initializer_range: 0.02\n output_attentions: False\n output_hidden_states: False\n\n```\n\n\n\nData Loader Setting\n\n```\nbatch_size: 8 # Batch size for each GPU with assuming that gradient_accumulation_steps == 1.\neval_batch_size: 16\nremove_short_samples: true # Whether to remove samples the length of which are less than batch_max_steps.\nallow_cache: true # Whether to allow cache in dataset. If true, it requires cpu memory.\nmel_length_threshold: 32 # remove all targets has mel_length <= 32\nis_shuffle: true # shuffle dataset after each epoch.\n\n```\n\n\n\nOptimizer & Scheduler Setting\n\n```\noptimizer_params:\n initial_learning_rate: 0.0001\n end_learning_rate: 0.00005\n decay_steps: 150000 # < train_max_steps is recommend.\n warmup_proportion: 0.02\n weight_decay: 0.001\n\ngradient_accumulation_steps: 2\nvar_train_expr:\n null # trainable variable expr (eg. 'embeddings|encoder|decoder' )\n # must separate by |. if var_train_expr is null then we\n # training all variable\n\n```\n\n\n\nInterval Setting\n\n```\ntrain_max_steps: 200000 # Number of training steps.\nsave_interval_steps: 5000 # Interval steps to save checkpoint.\neval_interval_steps: 5000 # Interval steps to evaluate the network.\nlog_interval_steps: 200 # Interval steps to record the training log.\ndelay_f0_energy_steps: 3 # 2 steps use LR outputs only then 1 steps LR + F0 + Energy.\n\n```\n\n\n\nOther Setting\n\n```\nnum_save_intermediate_results: 1 # Number of batch to be saved as intermediate results.\n\n```\n\n\nHow to Use\n----------\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nLightSpeech MFA SW v1 was trained and evaluated by David Samuel Setiawan, Wilson Wongso. All computation and development are done on Scaleway.\n\n\nFramework versions\n------------------\n\n\n* TensorFlowTTS 1.8\n* TensorFlow 2.7.0"
] |
[
"TAGS\n#tensorflowtts #tflite #tensorboard #onnx #audio #text-to-speech #text-to-mel #sw #dataset-bookbot/OpenBible_Swahili #arxiv-2102.04040 #license-cc-by-sa-4.0 #region-us \n",
"# Hop size.\nformat: \"npy\"\n\n```\n\n\n\nNetwork Architecture Setting\n\n```\nmodel_type: lightspeech\nlightspeech_params:\n dataset: \"swahiliipa\"\n n_speakers: 1\n encoder_hidden_size: 256\n encoder_num_hidden_layers: 3\n encoder_num_attention_heads: 2\n encoder_attention_head_size: 16\n encoder_intermediate_size: 1024\n encoder_intermediate_kernel_size:\n - 5\n - 25\n - 13\n - 9\n encoder_hidden_act: \"mish\"\n decoder_hidden_size: 256\n decoder_num_hidden_layers: 3\n decoder_num_attention_heads: 2\n decoder_attention_head_size: 16\n decoder_intermediate_size: 1024\n decoder_intermediate_kernel_size:\n - 17\n - 21\n - 9\n - 13\n decoder_hidden_act: \"mish\"\n variant_prediction_num_conv_layers: 2\n variant_predictor_filter: 256\n variant_predictor_kernel_size: 3\n variant_predictor_dropout_rate: 0.5\n num_mels: 80\n hidden_dropout_prob: 0.2\n attention_probs_dropout_prob: 0.1\n max_position_embeddings: 2048\n initializer_range: 0.02\n output_attentions: False\n output_hidden_states: False\n\n```\n\n\n\nData Loader Setting\n\n```\nbatch_size: 8 # Batch size for each GPU with assuming that gradient_accumulation_steps == 1.\neval_batch_size: 16\nremove_short_samples: true # Whether to remove samples the length of which are less than batch_max_steps.\nallow_cache: true # Whether to allow cache in dataset. If true, it requires cpu memory.\nmel_length_threshold: 32 # remove all targets has mel_length <= 32\nis_shuffle: true # shuffle dataset after each epoch.\n\n```\n\n\n\nOptimizer & Scheduler Setting\n\n```\noptimizer_params:\n initial_learning_rate: 0.0001\n end_learning_rate: 0.00005\n decay_steps: 150000 # < train_max_steps is recommend.\n warmup_proportion: 0.02\n weight_decay: 0.001\n\ngradient_accumulation_steps: 2\nvar_train_expr:\n null # trainable variable expr (eg. 'embeddings|encoder|decoder' )\n # must separate by |. if var_train_expr is null then we\n # training all variable\n\n```\n\n\n\nInterval Setting\n\n```\ntrain_max_steps: 200000 # Number of training steps.\nsave_interval_steps: 5000 # Interval steps to save checkpoint.\neval_interval_steps: 5000 # Interval steps to evaluate the network.\nlog_interval_steps: 200 # Interval steps to record the training log.\ndelay_f0_energy_steps: 3 # 2 steps use LR outputs only then 1 steps LR + F0 + Energy.\n\n```\n\n\n\nOther Setting\n\n```\nnum_save_intermediate_results: 1 # Number of batch to be saved as intermediate results.\n\n```\n\n\nHow to Use\n----------\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nLightSpeech MFA SW v1 was trained and evaluated by David Samuel Setiawan, Wilson Wongso. All computation and development are done on Scaleway.\n\n\nFramework versions\n------------------\n\n\n* TensorFlowTTS 1.8\n* TensorFlow 2.7.0"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [google/gemma-1.1-7b-it](https://huggingface.co/google/gemma-1.1-7b-it) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
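A minimal loading sketch (not part of the auto-generated card); it assumes access to the gated Gemma base weights and that `peft`'s `AutoPeftModelForCausalLM` can resolve the adapter together with its base model:
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "aidiary/gemma-7b-finetune-gozarinnemon"
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-7b-it")
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello! Please introduce yourself.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```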
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "google/gemma-1.1-7b-it", "model-index": [{"name": "outputs", "results": []}]}
|
aidiary/gemma-7b-finetune-gozarinnemon
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-1.1-7b-it",
"license:gemma",
"region:us"
] | null |
2024-04-15T02:43:50+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-1.1-7b-it #license-gemma #region-us
|
# outputs
This model is a fine-tuned version of google/gemma-1.1-7b-it on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# outputs\n\nThis model is a fine-tuned version of google/gemma-1.1-7b-it on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 500",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-1.1-7b-it #license-gemma #region-us \n",
"# outputs\n\nThis model is a fine-tuned version of google/gemma-1.1-7b-it on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 500",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.1
|
{"library_name": "peft", "base_model": "vilsonrodrigues/falcon-7b-instruct-sharded"}
|
deepaknh/falcon7B_FineTuning_ReExperiment_1_QLORA_7perParam_ILR_increased_v4
| null |
[
"peft",
"arxiv:1910.09700",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"region:us"
] | null |
2024-04-15T02:45:16+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #arxiv-1910.09700 #base_model-vilsonrodrigues/falcon-7b-instruct-sharded #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.1
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n\n- PEFT 0.6.1"
] |
[
"TAGS\n#peft #arxiv-1910.09700 #base_model-vilsonrodrigues/falcon-7b-instruct-sharded #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: float16",
"### Framework versions\n\n\n- PEFT 0.6.1"
] |
reinforcement-learning
|
stable-baselines3
|
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga APLunch -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga APLunch -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga APLunch
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
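As a convenience, the sketch below shows one way to load and run the checkpoint with plain SB3, outside the RL Zoo scripts. It is not part of the original card: the checkpoint filename follows the usual RL-Zoo naming convention and is an assumption, as is the use of the `huggingface_sb3` helper.

```python
# Hedged sketch (not from the original card): running the agent with plain SB3.
# Assumes the checkpoint is stored as "dqn-SpaceInvadersNoFrameskip-v4.zip" (RL-Zoo convention)
# and that huggingface_sb3 is installed (pip install huggingface_sb3).
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

checkpoint = load_from_hub(
    repo_id="APLunch/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed filename
)
model = DQN.load(checkpoint)

# Recreate the training preprocessing: AtariWrapper + 4-frame stacking (see hyperparameters above).
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1_000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```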
|
{"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "654.00 +/- 223.01", "name": "mean_reward", "verified": false}]}]}]}
|
APLunch/dqn-SpaceInvadersNoFrameskip-v4
| null |
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-15T02:45:29+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# DQN Agent playing SpaceInvadersNoFrameskip-v4
This is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4
using the stable-baselines3 library
and the RL Zoo.
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: URL
SB3: URL
SB3 Contrib: URL
Install the RL Zoo (with SB3 and SB3-Contrib):
If you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:
## Training (with the RL Zoo)
## Hyperparameters
# Environment Arguments
|
[
"# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.",
"## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:",
"## Training (with the RL Zoo)",
"## Hyperparameters",
"# Environment Arguments"
] |
[
"TAGS\n#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.",
"## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:",
"## Training (with the RL Zoo)",
"## Hyperparameters",
"# Environment Arguments"
] |
text-to-speech
|
tensorflowtts
|
# MB-MelGAN HiFi PostNets SW v1
MB-MelGAN HiFi PostNets SW v1 is a mel-to-wav model based on the [MB-MelGAN](https://arxiv.org/abs/2005.05106) architecture with a [HiFi-GAN](https://arxiv.org/abs/2010.05646) discriminator. This model was trained from scratch on a synthetic audio dataset. Instead of training on ground truth waveform spectrograms, this model was trained on the generated PostNet spectrograms of [LightSpeech MFA SW v1](https://huggingface.co/bookbot/lightspeech-mfa-sw-v1). The list of real speakers includes:
- sw-KE-OpenBible
This model was trained using the [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS) framework. All training was done on a Scaleway RENDER-S VM with a Tesla P100 GPU. All necessary scripts used for training can be found in this [Github Fork](https://github.com/bookbot-hive/TensorFlowTTS), as well as the [Training metrics](https://huggingface.co/bookbot/mb-melgan-hifi-postnets-sw-v1/tensorboard) logged via Tensorboard.
## Model
| Model | Config | SR (Hz) | Mel range (Hz) | FFT / Hop / Win (pt) | #steps |
| ------------------------------- | ----------------------------------------------------------------------------------------- | ------- | -------------- | -------------------- | ------ |
| `mb-melgan-hifi-postnets-sw-v1` | [Link](https://huggingface.co/bookbot/mb-melgan-hifi-postnets-sw-v1/blob/main/config.yml) | 44.1K | 20-11025 | 2048 / 512 / None | 1M |
## Training Procedure
<details>
<summary>Feature Extraction Setting</summary>
sampling_rate: 44100
hop_size: 512 # Hop size.
format: "npy"
</details>
<details>
<summary>Generator Network Architecture Setting</summary>
model_type: "multiband_melgan_generator"
multiband_melgan_generator_params:
out_channels: 4 # Number of output channels (number of subbands).
kernel_size: 7 # Kernel size of initial and final conv layers.
filters: 384 # Initial number of channels for conv layers.
upsample_scales: [8, 4, 4] # List of Upsampling scales.
stack_kernel_size: 3 # Kernel size of dilated conv layers in residual stack.
stacks: 4 # Number of stacks in a single residual stack module.
is_weight_norm: false # Use weight-norm or not.
</details>
<details>
<summary>Discriminator Network Architecture Setting</summary>
multiband_melgan_discriminator_params:
out_channels: 1 # Number of output channels.
scales: 3 # Number of multi-scales.
downsample_pooling: "AveragePooling1D" # Pooling type for the input downsampling.
downsample_pooling_params: # Parameters of the above pooling function.
pool_size: 4
strides: 2
kernel_sizes: [5, 3] # List of kernel size.
filters: 16 # Number of channels of the initial conv layer.
max_downsample_filters: 512 # Maximum number of channels of downsampling layers.
downsample_scales: [4, 4, 4] # List of downsampling scales.
nonlinear_activation: "LeakyReLU" # Nonlinear activation function.
nonlinear_activation_params: # Parameters of nonlinear activation function.
alpha: 0.2
is_weight_norm: false # Use weight-norm or not.
hifigan_discriminator_params:
out_channels: 1 # Number of output channels (number of subbands).
period_scales: [3, 5, 7, 11, 17, 23, 37] # List of period scales.
n_layers: 5 # Number of layer of each period discriminator.
kernel_size: 5 # Kernel size.
strides: 3 # Strides
filters: 8 # In Conv filters of each period discriminator
filter_scales: 4 # Filter scales.
max_filters: 512 # maximum filters of period discriminator's conv.
is_weight_norm: false # Use weight-norm or not.
</details>
<details>
<summary>STFT Loss Setting</summary>
stft_loss_params:
fft_lengths: [1024, 2048, 512] # List of FFT size for STFT-based loss.
frame_steps: [120, 240, 50] # List of hop size for STFT-based loss
frame_lengths: [600, 1200, 240] # List of window length for STFT-based loss.
subband_stft_loss_params:
fft_lengths: [384, 683, 171] # List of FFT size for STFT-based loss.
frame_steps: [30, 60, 10] # List of hop size for STFT-based loss
frame_lengths: [150, 300, 60] # List of window length for STFT-based loss.
</details>
<details>
<summary>Adversarial Loss Setting</summary>
lambda_feat_match: 10.0 # Loss balancing coefficient for feature matching loss
lambda_adv: 2.5 # Loss balancing coefficient for adversarial loss.
</details>
<details>
<summary>Data Loader Setting</summary>
batch_size: 32 # Batch size for each GPU with assuming that gradient_accumulation_steps == 1.
eval_batch_size: 16
batch_max_steps: 8192 # Length of each audio in batch for training. Make sure divisible by hop_size.
batch_max_steps_valid: 8192 # Length of each audio for validation. Make sure divisible by hop_size.
remove_short_samples: true # Whether to remove samples the length of which are less than batch_max_steps.
allow_cache: false # Whether to allow cache in dataset. If true, it requires cpu memory.
is_shuffle: false # shuffle dataset after each epoch.
</details>
<details>
<summary>Optimizer & Scheduler Setting</summary>
generator_optimizer_params:
lr_fn: "PiecewiseConstantDecay"
lr_params:
boundaries: [100000, 200000, 300000, 400000, 500000, 600000, 700000]
values:
[
0.0005,
0.0005,
0.00025,
0.000125,
0.0000625,
0.00003125,
0.000015625,
0.000001,
]
amsgrad: false
discriminator_optimizer_params:
lr_fn: "PiecewiseConstantDecay"
lr_params:
boundaries: [100000, 200000, 300000, 400000, 500000]
values: [0.00025, 0.000125, 0.0000625, 0.00003125, 0.000015625, 0.000001]
amsgrad: false
gradient_accumulation_steps: 1
</details>
<details>
<summary>Interval Setting</summary>
discriminator_train_start_steps: 200000 # steps begin training discriminator
train_max_steps: 1000000 # Number of training steps.
save_interval_steps: 20000 # Interval steps to save checkpoint.
eval_interval_steps: 5000 # Interval steps to evaluate the network.
log_interval_steps: 200 # Interval steps to record the training log.
</details>
<details>
<summary>Other Setting</summary>
num_save_intermediate_results: 1 # Number of batch to be saved as intermediate results.
</details>
## How to Use
```py
import soundfile as sf
import tensorflow as tf
from tensorflow_tts.inference import TFAutoModel, AutoProcessor
lightspeech = TFAutoModel.from_pretrained("bookbot/lightspeech-mfa-sw-v1")
processor = AutoProcessor.from_pretrained("bookbot/lightspeech-mfa-sw-v1")
mb_melgan = TFAutoModel.from_pretrained("bookbot/mb-melgan-hifi-postnets-sw-v1")
text, speaker_name = "Hello World.", "sw-KE-OpenBible"
input_ids = processor.text_to_sequence(text)
mel, _, _ = lightspeech.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
speaker_ids=tf.convert_to_tensor(
[processor.speakers_map[speaker_name]], dtype=tf.int32
),
speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
f0_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
energy_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
)
audio = mb_melgan.inference(mel)[0, :, 0]
sf.write("./audio.wav", audio, 44100, "PCM_16")
```
## Disclaimer
Do consider the biases from the pre-training datasets, which may be carried over into the results of this model.
## Authors
MB-MelGAN HiFi PostNets SW v1 was trained and evaluated by [David Samuel Setiawan](https://davidsamuell.github.io/) and [Wilson Wongso](https://wilsonwongso.dev/). All computation and development were done on Scaleway.
## Framework versions
- TensorFlowTTS 1.8
- TensorFlow 2.7.0
|
{"language": "sw", "license": "cc-by-sa-4.0", "tags": ["tensorflowtts", "audio", "text-to-speech", "mel-to-wav"], "datasets": ["bookbot/OpenBible_Swahili"], "inference": false}
|
bookbot/mb-melgan-hifi-postnets-sw-v1
| null |
[
"tensorflowtts",
"tflite",
"tensorboard",
"onnx",
"audio",
"text-to-speech",
"mel-to-wav",
"sw",
"dataset:bookbot/OpenBible_Swahili",
"arxiv:2005.05106",
"arxiv:2010.05646",
"license:cc-by-sa-4.0",
"region:us"
] | null |
2024-04-15T02:45:35+00:00
|
[
"2005.05106",
"2010.05646"
] |
[
"sw"
] |
TAGS
#tensorflowtts #tflite #tensorboard #onnx #audio #text-to-speech #mel-to-wav #sw #dataset-bookbot/OpenBible_Swahili #arxiv-2005.05106 #arxiv-2010.05646 #license-cc-by-sa-4.0 #region-us
|
MB-MelGAN HiFi PostNets SW v1
=============================
MB-MelGAN HiFi PostNets SW v1 is a mel-to-wav model based on the MB-MelGAN architecture with a HiFi-GAN discriminator. This model was trained from scratch on a synthetic audio dataset. Instead of training on ground truth waveform spectrograms, this model was trained on the generated PostNet spectrograms of LightSpeech MFA SW v1. The list of real speakers includes:
* sw-KE-OpenBible
This model was trained using the TensorFlowTTS framework. All training was done on a Scaleway RENDER-S VM with a Tesla P100 GPU. All necessary scripts used for training can be found in this Github Fork, as well as the Training metrics logged via Tensorboard.
Model
-----
Training Procedure
------------------
Feature Extraction Setting
```
sampling_rate: 44100
hop_size: 512 # Hop size.
format: "npy"
```
Generator Network Architecture Setting
```
model_type: "multiband_melgan_generator"
multiband_melgan_generator_params:
out_channels: 4 # Number of output channels (number of subbands).
kernel_size: 7 # Kernel size of initial and final conv layers.
filters: 384 # Initial number of channels for conv layers.
upsample_scales: [8, 4, 4] # List of Upsampling scales.
stack_kernel_size: 3 # Kernel size of dilated conv layers in residual stack.
stacks: 4 # Number of stacks in a single residual stack module.
is_weight_norm: false # Use weight-norm or not.
```
Discriminator Network Architecture Setting
```
multiband_melgan_discriminator_params:
out_channels: 1 # Number of output channels.
scales: 3 # Number of multi-scales.
downsample_pooling: "AveragePooling1D" # Pooling type for the input downsampling.
downsample_pooling_params: # Parameters of the above pooling function.
pool_size: 4
strides: 2
kernel_sizes: [5, 3] # List of kernel size.
filters: 16 # Number of channels of the initial conv layer.
max_downsample_filters: 512 # Maximum number of channels of downsampling layers.
downsample_scales: [4, 4, 4] # List of downsampling scales.
nonlinear_activation: "LeakyReLU" # Nonlinear activation function.
nonlinear_activation_params: # Parameters of nonlinear activation function.
alpha: 0.2
is_weight_norm: false # Use weight-norm or not.
hifigan_discriminator_params:
out_channels: 1 # Number of output channels (number of subbands).
period_scales: [3, 5, 7, 11, 17, 23, 37] # List of period scales.
n_layers: 5 # Number of layer of each period discriminator.
kernel_size: 5 # Kernel size.
strides: 3 # Strides
filters: 8 # In Conv filters of each period discriminator
filter_scales: 4 # Filter scales.
max_filters: 512 # maximum filters of period discriminator's conv.
is_weight_norm: false # Use weight-norm or not.
```
STFT Loss Setting
```
stft_loss_params:
fft_lengths: [1024, 2048, 512] # List of FFT size for STFT-based loss.
frame_steps: [120, 240, 50] # List of hop size for STFT-based loss
frame_lengths: [600, 1200, 240] # List of window length for STFT-based loss.
subband_stft_loss_params:
fft_lengths: [384, 683, 171] # List of FFT size for STFT-based loss.
frame_steps: [30, 60, 10] # List of hop size for STFT-based loss
frame_lengths: [150, 300, 60] # List of window length for STFT-based loss.
```
Adversarial Loss Setting
```
lambda_feat_match: 10.0 # Loss balancing coefficient for feature matching loss
lambda_adv: 2.5 # Loss balancing coefficient for adversarial loss.
```
Data Loader Setting
```
batch_size: 32 # Batch size for each GPU with assuming that gradient_accumulation_steps == 1.
eval_batch_size: 16
batch_max_steps: 8192 # Length of each audio in batch for training. Make sure divisible by hop_size.
batch_max_steps_valid: 8192 # Length of each audio for validation. Make sure divisible by hop_size.
remove_short_samples: true # Whether to remove samples the length of which are less than batch_max_steps.
allow_cache: false # Whether to allow cache in dataset. If true, it requires cpu memory.
is_shuffle: false # shuffle dataset after each epoch.
```
Optimizer & Scheduler Setting
```
generator_optimizer_params:
lr_fn: "PiecewiseConstantDecay"
lr_params:
boundaries: [100000, 200000, 300000, 400000, 500000, 600000, 700000]
values:
[
0.0005,
0.0005,
0.00025,
0.000125,
0.0000625,
0.00003125,
0.000015625,
0.000001,
]
amsgrad: false
discriminator_optimizer_params:
lr_fn: "PiecewiseConstantDecay"
lr_params:
boundaries: [100000, 200000, 300000, 400000, 500000]
values: [0.00025, 0.000125, 0.0000625, 0.00003125, 0.000015625, 0.000001]
amsgrad: false
gradient_accumulation_steps: 1
```
Interval Setting
```
discriminator_train_start_steps: 200000 # steps begin training discriminator
train_max_steps: 1000000 # Number of training steps.
save_interval_steps: 20000 # Interval steps to save checkpoint.
eval_interval_steps: 5000 # Interval steps to evaluate the network.
log_interval_steps: 200 # Interval steps to record the training log.
```
Other Setting
```
num_save_intermediate_results: 1 # Number of batch to be saved as intermediate results.
```
How to Use
----------
Disclaimer
----------
Do consider the biases from the pre-training datasets, which may be carried over into the results of this model.
Authors
-------
MB-MelGAN HiFi PostNets SW v1 was trained and evaluated by David Samuel Setiawan and Wilson Wongso. All computation and development were done on Scaleway.
Framework versions
------------------
* TensorFlowTTS 1.8
* TensorFlow 2.7.0
|
[
"# Hop size.\nformat: \"npy\"\n\n```\n\n\n\nGenerator Network Architecture Setting\n\n```\nmodel_type: \"multiband_melgan_generator\"\n\nmultiband_melgan_generator_params:\n out_channels: 4 # Number of output channels (number of subbands).\n kernel_size: 7 # Kernel size of initial and final conv layers.\n filters: 384 # Initial number of channels for conv layers.\n upsample_scales: [8, 4, 4] # List of Upsampling scales.\n stack_kernel_size: 3 # Kernel size of dilated conv layers in residual stack.\n stacks: 4 # Number of stacks in a single residual stack module.\n is_weight_norm: false # Use weight-norm or not.\n\n```\n\n\n\nDiscriminator Network Architecture Setting\n\n```\nmultiband_melgan_discriminator_params:\n out_channels: 1 # Number of output channels.\n scales: 3 # Number of multi-scales.\n downsample_pooling: \"AveragePooling1D\" # Pooling type for the input downsampling.\n downsample_pooling_params: # Parameters of the above pooling function.\n pool_size: 4\n strides: 2\n kernel_sizes: [5, 3] # List of kernel size.\n filters: 16 # Number of channels of the initial conv layer.\n max_downsample_filters: 512 # Maximum number of channels of downsampling layers.\n downsample_scales: [4, 4, 4] # List of downsampling scales.\n nonlinear_activation: \"LeakyReLU\" # Nonlinear activation function.\n nonlinear_activation_params: # Parameters of nonlinear activation function.\n alpha: 0.2\n is_weight_norm: false # Use weight-norm or not.\n\nhifigan_discriminator_params:\n out_channels: 1 # Number of output channels (number of subbands).\n period_scales: [3, 5, 7, 11, 17, 23, 37] # List of period scales.\n n_layers: 5 # Number of layer of each period discriminator.\n kernel_size: 5 # Kernel size.\n strides: 3 # Strides\n filters: 8 # In Conv filters of each period discriminator\n filter_scales: 4 # Filter scales.\n max_filters: 512 # maximum filters of period discriminator's conv.\n is_weight_norm: false # Use weight-norm or not.\n\n```\n\n\n\nSTFT Loss Setting\n\n```\nstft_loss_params:\n fft_lengths: [1024, 2048, 512] # List of FFT size for STFT-based loss.\n frame_steps: [120, 240, 50] # List of hop size for STFT-based loss\n frame_lengths: [600, 1200, 240] # List of window length for STFT-based loss.\n\nsubband_stft_loss_params:\n fft_lengths: [384, 683, 171] # List of FFT size for STFT-based loss.\n frame_steps: [30, 60, 10] # List of hop size for STFT-based loss\n frame_lengths: [150, 300, 60] # List of window length for STFT-based loss.\n\n```\n\n\n\nAdversarial Loss Setting\n\n```\nlambda_feat_match: 10.0 # Loss balancing coefficient for feature matching loss\nlambda_adv: 2.5 # Loss balancing coefficient for adversarial loss.\n\n```\n\n\n\nData Loader Setting\n\n```\nbatch_size: 32 # Batch size for each GPU with assuming that gradient_accumulation_steps == 1.\neval_batch_size: 16\nbatch_max_steps: 8192 # Length of each audio in batch for training. Make sure dividable by hop_size.\nbatch_max_steps_valid: 8192 # Length of each audio for validation. Make sure dividable by hope_size.\nremove_short_samples: true # Whether to remove samples the length of which are less than batch_max_steps.\nallow_cache: false # Whether to allow cache in dataset. 
If true, it requires cpu memory.\nis_shuffle: false # shuffle dataset after each epoch.\n\n```\n\n\n\nOptimizer & Scheduler Setting\n\n```\ngenerator_optimizer_params:\n lr_fn: \"PiecewiseConstantDecay\"\n lr_params:\n boundaries: [100000, 200000, 300000, 400000, 500000, 600000, 700000]\n values:\n [\n 0.0005,\n 0.0005,\n 0.00025,\n 0.000125,\n 0.0000625,\n 0.00003125,\n 0.000015625,\n 0.000001,\n ]\n amsgrad: false\n\ndiscriminator_optimizer_params:\n lr_fn: \"PiecewiseConstantDecay\"\n lr_params:\n boundaries: [100000, 200000, 300000, 400000, 500000]\n values: [0.00025, 0.000125, 0.0000625, 0.00003125, 0.000015625, 0.000001]\n amsgrad: false\n\ngradient_accumulation_steps: 1\n\n```\n\n\n\nInterval Setting\n\n```\ndiscriminator_train_start_steps: 200000 # steps begin training discriminator\ntrain_max_steps: 1000000 # Number of training steps.\nsave_interval_steps: 20000 # Interval steps to save checkpoint.\neval_interval_steps: 5000 # Interval steps to evaluate the network.\nlog_interval_steps: 200 # Interval steps to record the training log.\n\n```\n\n\n\nOther Setting\n\n```\nnum_save_intermediate_results: 1 # Number of batch to be saved as intermediate results.\n\n```\n\n\nHow to Use\n----------\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nMB-MelGAN HiFi PostNets SW v1 was trained and evaluated by David Samuel Setiawan, Wilson Wongso. All computation and development are done on Scaleway.\n\n\nFramework versions\n------------------\n\n\n* TensorFlowTTS 1.8\n* TensorFlow 2.7.0"
] |
[
"TAGS\n#tensorflowtts #tflite #tensorboard #onnx #audio #text-to-speech #mel-to-wav #sw #dataset-bookbot/OpenBible_Swahili #arxiv-2005.05106 #arxiv-2010.05646 #license-cc-by-sa-4.0 #region-us \n",
"# Hop size.\nformat: \"npy\"\n\n```\n\n\n\nGenerator Network Architecture Setting\n\n```\nmodel_type: \"multiband_melgan_generator\"\n\nmultiband_melgan_generator_params:\n out_channels: 4 # Number of output channels (number of subbands).\n kernel_size: 7 # Kernel size of initial and final conv layers.\n filters: 384 # Initial number of channels for conv layers.\n upsample_scales: [8, 4, 4] # List of Upsampling scales.\n stack_kernel_size: 3 # Kernel size of dilated conv layers in residual stack.\n stacks: 4 # Number of stacks in a single residual stack module.\n is_weight_norm: false # Use weight-norm or not.\n\n```\n\n\n\nDiscriminator Network Architecture Setting\n\n```\nmultiband_melgan_discriminator_params:\n out_channels: 1 # Number of output channels.\n scales: 3 # Number of multi-scales.\n downsample_pooling: \"AveragePooling1D\" # Pooling type for the input downsampling.\n downsample_pooling_params: # Parameters of the above pooling function.\n pool_size: 4\n strides: 2\n kernel_sizes: [5, 3] # List of kernel size.\n filters: 16 # Number of channels of the initial conv layer.\n max_downsample_filters: 512 # Maximum number of channels of downsampling layers.\n downsample_scales: [4, 4, 4] # List of downsampling scales.\n nonlinear_activation: \"LeakyReLU\" # Nonlinear activation function.\n nonlinear_activation_params: # Parameters of nonlinear activation function.\n alpha: 0.2\n is_weight_norm: false # Use weight-norm or not.\n\nhifigan_discriminator_params:\n out_channels: 1 # Number of output channels (number of subbands).\n period_scales: [3, 5, 7, 11, 17, 23, 37] # List of period scales.\n n_layers: 5 # Number of layer of each period discriminator.\n kernel_size: 5 # Kernel size.\n strides: 3 # Strides\n filters: 8 # In Conv filters of each period discriminator\n filter_scales: 4 # Filter scales.\n max_filters: 512 # maximum filters of period discriminator's conv.\n is_weight_norm: false # Use weight-norm or not.\n\n```\n\n\n\nSTFT Loss Setting\n\n```\nstft_loss_params:\n fft_lengths: [1024, 2048, 512] # List of FFT size for STFT-based loss.\n frame_steps: [120, 240, 50] # List of hop size for STFT-based loss\n frame_lengths: [600, 1200, 240] # List of window length for STFT-based loss.\n\nsubband_stft_loss_params:\n fft_lengths: [384, 683, 171] # List of FFT size for STFT-based loss.\n frame_steps: [30, 60, 10] # List of hop size for STFT-based loss\n frame_lengths: [150, 300, 60] # List of window length for STFT-based loss.\n\n```\n\n\n\nAdversarial Loss Setting\n\n```\nlambda_feat_match: 10.0 # Loss balancing coefficient for feature matching loss\nlambda_adv: 2.5 # Loss balancing coefficient for adversarial loss.\n\n```\n\n\n\nData Loader Setting\n\n```\nbatch_size: 32 # Batch size for each GPU with assuming that gradient_accumulation_steps == 1.\neval_batch_size: 16\nbatch_max_steps: 8192 # Length of each audio in batch for training. Make sure dividable by hop_size.\nbatch_max_steps_valid: 8192 # Length of each audio for validation. Make sure dividable by hope_size.\nremove_short_samples: true # Whether to remove samples the length of which are less than batch_max_steps.\nallow_cache: false # Whether to allow cache in dataset. 
If true, it requires cpu memory.\nis_shuffle: false # shuffle dataset after each epoch.\n\n```\n\n\n\nOptimizer & Scheduler Setting\n\n```\ngenerator_optimizer_params:\n lr_fn: \"PiecewiseConstantDecay\"\n lr_params:\n boundaries: [100000, 200000, 300000, 400000, 500000, 600000, 700000]\n values:\n [\n 0.0005,\n 0.0005,\n 0.00025,\n 0.000125,\n 0.0000625,\n 0.00003125,\n 0.000015625,\n 0.000001,\n ]\n amsgrad: false\n\ndiscriminator_optimizer_params:\n lr_fn: \"PiecewiseConstantDecay\"\n lr_params:\n boundaries: [100000, 200000, 300000, 400000, 500000]\n values: [0.00025, 0.000125, 0.0000625, 0.00003125, 0.000015625, 0.000001]\n amsgrad: false\n\ngradient_accumulation_steps: 1\n\n```\n\n\n\nInterval Setting\n\n```\ndiscriminator_train_start_steps: 200000 # steps begin training discriminator\ntrain_max_steps: 1000000 # Number of training steps.\nsave_interval_steps: 20000 # Interval steps to save checkpoint.\neval_interval_steps: 5000 # Interval steps to evaluate the network.\nlog_interval_steps: 200 # Interval steps to record the training log.\n\n```\n\n\n\nOther Setting\n\n```\nnum_save_intermediate_results: 1 # Number of batch to be saved as intermediate results.\n\n```\n\n\nHow to Use\n----------\n\n\nDisclaimer\n----------\n\n\nDo consider the biases which came from pre-training datasets that may be carried over into the results of this model.\n\n\nAuthors\n-------\n\n\nMB-MelGAN HiFi PostNets SW v1 was trained and evaluated by David Samuel Setiawan, Wilson Wongso. All computation and development are done on Scaleway.\n\n\nFramework versions\n------------------\n\n\n* TensorFlowTTS 1.8\n* TensorFlow 2.7.0"
] |
text-generation
|
transformers
|

# NeuralStar_AlphaWriter_4x7b
I was blown away by the writing results I was getting from mlabonne/Beyonder-4x7B-v3 while writing in [NovelCrafter](https://www.novelcrafter.com).
Inspired by his [LLM Course](https://github.com/mlabonne/llm-course) and fueled by his [LazyMergeKit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb).
I couldn't help but wonder what a writing model would be like if all 4 “experts” excelled in creative writing.
I present NeuralStar-AlphaWriter-4x7b:
NeuralStar_AlphaWriter_4x7b is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [OmnicromsBrain/NeuralStar-7b-Lazy](https://huggingface.co/OmnicromsBrain/NeuralStar-7b-Lazy)
## ⚡ Quantized Models
Thanks to MRadermacher for the quantized models.
**.GGUF** https://huggingface.co/mradermacher/NeuralStar_AlphaWriter_4x7b-GGUF
Q4_K_M and Q5_K_M .gguf [**Here**](https://huggingface.co/OmnicromsBrain/NeuralStar_AlphaWriter_4x7b-GGUF) created with [mlabonne/Autogguf](https://colab.research.google.com/drive/1P646NEg33BZy4BfLDNpTz0V0lwIU3CHu)
## 🧩 Configuration
```yaml
base_model: mlabonne/AlphaMonarch-7B
experts:
- source_model: mlabonne/AlphaMonarch-7B
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- "I want"
- source_model: FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B
positive_prompts:
- "edit"
- "rewrite"
- "evaluate"
- "spelling"
- "grammer"
- source_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "prose"
- "character"
- source_model: OmnicromsBrain/NeuralStar-7b-Lazy
positive_prompts:
- "codex"
- "plot"
- "outline"
- "scenebeat"
- "count"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "OmnicromsBrain/NeuralStar_AlphaWriter_4x7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "mlabonne/AlphaMonarch-7B", "FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B", "SanjiWatsuki/Kunoichi-DPO-v2-7B", "OmnicromsBrain/NeuralStar-7b-Lazy"], "base_model": ["mlabonne/AlphaMonarch-7B", "FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B", "SanjiWatsuki/Kunoichi-DPO-v2-7B", "OmnicromsBrain/NeuralStar-7b-Lazy"]}
|
OmnicromsBrain/NeuralStar_AlphaWriter_4x7b
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/AlphaMonarch-7B",
"FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B",
"SanjiWatsuki/Kunoichi-DPO-v2-7B",
"OmnicromsBrain/NeuralStar-7b-Lazy",
"conversational",
"base_model:mlabonne/AlphaMonarch-7B",
"base_model:FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:OmnicromsBrain/NeuralStar-7b-Lazy",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T02:46:54+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #mlabonne/AlphaMonarch-7B #FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B #SanjiWatsuki/Kunoichi-DPO-v2-7B #OmnicromsBrain/NeuralStar-7b-Lazy #conversational #base_model-mlabonne/AlphaMonarch-7B #base_model-FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-OmnicromsBrain/NeuralStar-7b-Lazy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
!image/png
# NeuralStar_AlphaWriter_4x7b
I was blown away by the writing results I was getting from mlabonne/Beyonder-4x7B-v3 while writing in NovelCrafter.
Inspired by his LLM Course and fueled by his LazyMergeKit.
I couldn't help but wonder what a writing model would be like if all 4 “experts” excelled in creative writing.
I present NeuralStar-AlphaWriter-4x7b:
NeuralStar_AlphaWriter_4x7b is a Mixture of Experts (MoE) made with the following models using LazyMergekit:
* mlabonne/AlphaMonarch-7B
* FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B
* SanjiWatsuki/Kunoichi-DPO-v2-7B
* OmnicromsBrain/NeuralStar-7b-Lazy
## ⚡ Quantized Models
Thanks to MRadermacher for the quantized models
.GGUF URL
Q4_K_M and Q5_K_M .gguf Here created with mlabonne/Autogguf
## Configuration
## Usage
|
[
"# NeuralStar_AlphaWriter_4x7b\n\nI was blown away by the writing results I was getting from mlabonne/Beyonder-4x7B-v3 while writing in NovelCrafter.\n\nInspired by his LLM Course and fueled by his LazyMergeKit.\nI couldnt help but wonder what a writing model would be like if all 4 “experts” excelled in creative writing.\n\nI present NeuralStar-AlphaWriter-4x7b: \n\n\nNeuralStar_AlphaWriter_4x7b is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* mlabonne/AlphaMonarch-7B\n* FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B\n* SanjiWatsuki/Kunoichi-DPO-v2-7B\n* OmnicromsBrain/NeuralStar-7b-Lazy",
"## ⚡ Quantized Models\n\nThanks to MRadermacher for the quantized models\n\n.GGUF URL\n\nQ4_K_M and Q5_K_M .gguf Here created with mlabonne/Autogguf",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #mlabonne/AlphaMonarch-7B #FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B #SanjiWatsuki/Kunoichi-DPO-v2-7B #OmnicromsBrain/NeuralStar-7b-Lazy #conversational #base_model-mlabonne/AlphaMonarch-7B #base_model-FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B #base_model-SanjiWatsuki/Kunoichi-DPO-v2-7B #base_model-OmnicromsBrain/NeuralStar-7b-Lazy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# NeuralStar_AlphaWriter_4x7b\n\nI was blown away by the writing results I was getting from mlabonne/Beyonder-4x7B-v3 while writing in NovelCrafter.\n\nInspired by his LLM Course and fueled by his LazyMergeKit.\nI couldnt help but wonder what a writing model would be like if all 4 “experts” excelled in creative writing.\n\nI present NeuralStar-AlphaWriter-4x7b: \n\n\nNeuralStar_AlphaWriter_4x7b is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* mlabonne/AlphaMonarch-7B\n* FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B\n* SanjiWatsuki/Kunoichi-DPO-v2-7B\n* OmnicromsBrain/NeuralStar-7b-Lazy",
"## ⚡ Quantized Models\n\nThanks to MRadermacher for the quantized models\n\n.GGUF URL\n\nQ4_K_M and Q5_K_M .gguf Here created with mlabonne/Autogguf",
"## Configuration",
"## Usage"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLama_music_generator
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.04
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
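The card does not include an inference snippet; the following is a minimal sketch, not part of the original card. It assumes the repository `ShushantLLM/LLama_music_generator` hosts weights loadable directly with `transformers`; if only a PEFT adapter was pushed, load `meta-llama/Llama-2-7b-hf` first and attach the adapter with `peft` instead.

```python
# Hedged sketch: prompting the fine-tuned model to fill in a missing lyric line.
# The repo id is taken from this card; the prompt format is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShushantLLM/LLama_music_generator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Complete the missing lyric line:\nI've been walking down this lonely road,\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```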
|
{"tags": ["trl", "sft", "missing lyric Llama2", "generated_from_trainer", "missing lyric Llama2 1"], "datasets": ["generator"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "LLama_music_generator", "results": []}]}
|
ShushantLLM/LLama_music_generator
| null |
[
"tensorboard",
"safetensors",
"trl",
"sft",
"missing lyric Llama2",
"generated_from_trainer",
"missing lyric Llama2 1",
"dataset:generator",
"base_model:meta-llama/Llama-2-7b-hf",
"region:us"
] | null |
2024-04-15T02:47:21+00:00
|
[] |
[] |
TAGS
#tensorboard #safetensors #trl #sft #missing lyric Llama2 #generated_from_trainer #missing lyric Llama2 1 #dataset-generator #base_model-meta-llama/Llama-2-7b-hf #region-us
|
# LLama_music_generator
This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.04
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# LLama_music_generator\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant_with_warmup\n- lr_scheduler_warmup_ratio: 0.04\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#tensorboard #safetensors #trl #sft #missing lyric Llama2 #generated_from_trainer #missing lyric Llama2 1 #dataset-generator #base_model-meta-llama/Llama-2-7b-hf #region-us \n",
"# LLama_music_generator\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant_with_warmup\n- lr_scheduler_warmup_ratio: 0.04\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
Model quantized to 8 bits using a modified [EETQ](https://github.com/NetEase-FuXi/EETQ) repo. Currently working on
decoupling its kernels from CUTLASS to make this a bit easier to use.
|
{}
|
alpindale/Mistral-7B-Instruct-v0.2-EETQ
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null |
2024-04-15T02:48:37+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
|
Model quantized to 8 bits using a modified EETQ repo. Currently working on
decoupling its kernels from CUTLASS to make this a bit easier to use.
|
[] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n"
] |
text-generation
|
transformers
|
# Multi_verse_modelExperiment26-7B
Multi_verse_modelExperiment26-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: MTSAIR/multi_verse_model
layer_range: [0, 32]
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
merge_method: slerp
base_model: MTSAIR/multi_verse_model
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Multi_verse_modelExperiment26-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["MTSAIR/multi_verse_model", "yam-peleg/Experiment26-7B"]}
|
automerger/Multi_verse_modelExperiment26-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"conversational",
"base_model:MTSAIR/multi_verse_model",
"base_model:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T02:49:52+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #conversational #base_model-MTSAIR/multi_verse_model #base_model-yam-peleg/Experiment26-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Multi_verse_modelExperiment26-7B
Multi_verse_modelExperiment26-7B is an automated merge created by Maxime Labonne using the following configuration.
* MTSAIR/multi_verse_model
* yam-peleg/Experiment26-7B
## Configuration
## Usage
|
[
"# Multi_verse_modelExperiment26-7B\n\nMulti_verse_modelExperiment26-7B is an automated merge created by Maxime Labonne using the following configuration.\n* MTSAIR/multi_verse_model\n* yam-peleg/Experiment26-7B",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #conversational #base_model-MTSAIR/multi_verse_model #base_model-yam-peleg/Experiment26-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Multi_verse_modelExperiment26-7B\n\nMulti_verse_modelExperiment26-7B is an automated merge created by Maxime Labonne using the following configuration.\n* MTSAIR/multi_verse_model\n* yam-peleg/Experiment26-7B",
"## Configuration",
"## Usage"
] |
unconditional-image-generation
|
diffusers
|
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Yellow514/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
|
Yellow514/sd-class-butterflies-32
| null |
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null |
2024-04-15T02:51:34+00:00
|
[] |
[] |
TAGS
#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
|
# Model Card for Unit 1 of the Diffusion Models Class
This model is a diffusion model for unconditional image generation of cute .
## Usage
|
[
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] |
[
"TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n",
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemini-all-data20240415_025230
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
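No usage snippet is provided in the card; below is a minimal sketch, not part of the original card, assuming the repository hosts a PEFT (LoRA) adapter for `google/gemma-2b` (the library and base model named above).

```python
# Hedged sketch: attaching the adapter from this repo to the google/gemma-2b base model.
# The adapter repo id is taken from this card's id; the prompt is only illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2b"
adapter_id = "mooo16/gemini-all-data20240415_025230"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Summarize in one sentence: The launch was delayed by two days because of bad weather."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```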
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "gemini-all-data20240415_025230", "results": []}]}
|
mooo16/gemini-all-data20240415_025230
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null |
2024-04-15T02:52:48+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us
|
# gemini-all-data20240415_025230
This model is a fine-tuned version of google/gemma-2b on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# gemini-all-data20240415_025230\n\nThis model is a fine-tuned version of google/gemma-2b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us \n",
"# gemini-all-data20240415_025230\n\nThis model is a fine-tuned version of google/gemma-2b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 5\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
audio-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dangerous-heartbeat-MIT
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
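
In practice, a checkpoint like this one can be loaded for inference with the standard `transformers` audio-classification pipeline; the repository id below matches this model, while the audio path is only a placeholder.

```python
from transformers import pipeline

# Load this fine-tuned AST checkpoint for audio classification
classifier = pipeline("audio-classification", model="Hemg/dangerous-heartbeat-MIT")

# "recording.wav" is a placeholder path; any audio file readable by ffmpeg works
print(classifier("recording.wav"))
```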
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1104 | 1.0 | 18 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "bsd-3-clause", "tags": ["generated_from_trainer"], "datasets": ["audiofolder"], "metrics": ["accuracy"], "base_model": "MIT/ast-finetuned-audioset-10-10-0.4593", "model-index": [{"name": "dangerous-heartbeat-MIT", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "audiofolder", "type": "audiofolder", "config": "default", "split": "train[:90]", "args": "default"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]}
|
Hemg/dangerous-heartbeat-MIT
| null |
[
"transformers",
"tensorboard",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T02:54:53+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #audio-spectrogram-transformer #audio-classification #generated_from_trainer #dataset-audiofolder #base_model-MIT/ast-finetuned-audioset-10-10-0.4593 #license-bsd-3-clause #model-index #endpoints_compatible #region-us
|
dangerous-heartbeat-MIT
=======================
This model is a fine-tuned version of MIT/ast-finetuned-audioset-10-10-0.4593 on the audiofolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0000
* Accuracy: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #audio-spectrogram-transformer #audio-classification #generated_from_trainer #dataset-audiofolder #base_model-MIT/ast-finetuned-audioset-10-10-0.4593 #license-bsd-3-clause #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9069
- Mean Iou: 0.0635
- Mean Accuracy: 0.1208
- Overall Accuracy: 0.4617
- Per Category Iou: [0.41928714602092204, 0.5390214194468376, 0.9012448258150285, 0.4803948505715545, 0.286634627489022, 0.3955429610634541, 0.013755700264604875, 0.0, 0.15830383993025027, 0.0, 0.0, 0.05008873442525973, 0.5408022058058475, 0.0, 0.0, 0.015204209024433743, 0.0, 0.0, 0.011558359136104005, 0.005458675263774912, 0.0, 0.0, 0.02979897667822854, 0.04487737341772152, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.10603847090333576, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan]
- Per Category Accuracy: [0.8627759743647633, 0.5675831029534248, 0.9798871578394094, 0.9012946287138017, 0.9550220818632652, 0.5924650130435409, 0.05558715352822963, nan, 0.230359005236165, 0.0, 0.0, 0.059625456626028785, 0.9366147691642339, nan, 0.0, 0.02054093126920065, 0.0, 0.0, 0.0285415776226433, 0.00626893301918546, 0.0, nan, 0.03166877112948556, 0.05442682722060975, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.23869553302274596, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan]
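
As a usage sketch (not part of the original card): a fine-tuned SegFormer checkpoint of this kind is normally run through the standard `transformers` semantic-segmentation classes. The checkpoint path and image file below are placeholders, not taken from this card.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

# Placeholders: substitute this checkpoint's repo id and a real scene image
processor = AutoImageProcessor.from_pretrained("nvidia/mit-b0")
model = SegformerForSemanticSegmentation.from_pretrained("path/to/segformer-b0-scene-parse-150")

image = Image.open("scene.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, height/4, width/4)

# Upsample to the original resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = upsampled.argmax(dim=1)[0]
```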
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
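
Mapped onto code, the configuration above corresponds roughly to the following `Trainer` setup. This is a hedged reconstruction, not the original training script: the dataset slices, label reduction, and preprocessing follow the standard `transformers` semantic-segmentation recipe and are assumptions; only the `TrainingArguments` mirror the values listed above.

```python
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    SegformerForSemanticSegmentation,
    Trainer,
    TrainingArguments,
)

# Tiny slices keep the sketch cheap to run; the card was trained on the full dataset
train_ds = load_dataset("scene_parse_150", split="train[:40]")
eval_ds = load_dataset("scene_parse_150", split="train[40:50]")

# do_reduce_labels shifts the background class out of the 150 labels (standard for ADE20K-style maps)
processor = AutoImageProcessor.from_pretrained("nvidia/mit-b0", do_reduce_labels=True)

def transforms(batch):
    images = [img.convert("RGB") for img in batch["image"]]
    # The processor builds pixel_values and per-pixel labels from image/annotation pairs
    return processor(images, batch["annotation"])

train_ds.set_transform(transforms)
eval_ds.set_transform(transforms)

model = SegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=150)

args = TrainingArguments(
    output_dir="segformer-b0-scene-parse-150",
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    seed=42,
    remove_unused_columns=False,  # keep the image/annotation columns for the transform
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```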
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.9809 | 1.0 | 20 | 4.9704 | 0.0048 | 0.0277 | 0.0613 | [0.004911874635991639, 0.03406250112083096, 0.2670554804708638, 0.06771658036261965, 0.04212671608698573, 0.025069857985360353, 0.06205384643258325, 0.0, 0.0003841762088211126, 0.0, 0.005606722085999641, 0.0, 0.04597345406633831, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0442952626641651, 0.00582620230447109, 0.0, 0.0, 0.0493881593227697, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.002515785319652723, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.00015083714616119463, 0.001783657619275733, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0] | [0.004944155329196129, 0.037135807489725774, 0.8389822852590688, 0.07646954588656024, 0.04640892156620858, 0.025150252943804678, 0.12986766769044097, nan, 0.0003870676400701022, 0.0, 0.006472659486329743, 0.0, 0.046842901935344364, nan, 0.0, 0.0, 0.0, 0.0, 0.07175761029586361, 0.009424436216762033, 0.0, nan, 0.1482831656551668, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.013976431899150453, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0002481184351997353, 0.03814298169136879, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 4.5728 | 2.0 | 40 | 4.7881 | 0.0176 | 0.0770 | 0.2450 | [0.2507255484460695, 0.2977284139484906, 0.23862898391447698, 0.2739403815702492, 0.23710310856686415, 0.17133881711748083, 0.05086156794370356, 0.0, 0.011658808401200172, 0.0, 0.00037139629532195414, 0.0, 0.39436900130239655, 0.0, 0.0, 0.0, 0.0, 0.0, 0.14939800879833295, 0.004368882283605801, 0.000995317800768217, 0.0, 0.05839388398904878, 0.017299836033682573, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.028484399440190995, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.001430474029497361, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0] | [0.3330531771794789, 0.4474675154553311, 0.941542176194739, 0.478411220473498, 0.5273000912881498, 0.1957395903895767, 0.12277708262237895, nan, 0.013160299762383476, 0.0, 0.00041425020712510354, 0.0, 0.4283345947309619, nan, 0.0, 0.0, 0.0, 0.0, 0.24514413259248705, 0.014347021204981488, 0.06484018264840183, nan, 0.12553460144189646, 0.029863992899805705, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.18964099753357083, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.002398478206930775, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 4.2574 | 3.0 | 60 | 4.4704 | 0.0274 | 0.1005 | 0.3626 | [0.3635850414549168, 0.37010164682793634, 0.3855682341569739, 0.2844444257208637, 0.2643699799196787, 0.25330901424005625, 0.013256218650834272, 0.0, 0.007188262747096532, 0.0, 0.0, 0.001604540741145009, 0.5977800636317968, 0.0, 0.0, 0.0, 0.0, 0.0, 0.052051926298157455, 0.007228346456692914, 0.009877098144051858, 0.0, 0.03977203005058424, 0.011193484771017457, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.021203910943574038, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0] | [0.6268206197458863, 0.5427333549701451, 0.9762653679689233, 0.725110212585445, 0.8315610273617725, 0.436360491763112, 0.01964129981420392, nan, 0.008644510628232283, 0.0, 0.0, 0.0016174464152105981, 0.6754514001513677, nan, 0.0, 0.0, 0.0, 0.0, 0.059030251222871255, 0.01931167956916863, 0.3812785388127854, nan, 0.059406948800456195, 0.014704118592434454, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.04932858317347218, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 3.9015 | 4.0 | 80 | 4.1496 | 0.0405 | 0.1007 | 0.3998 | [0.3742520572877334, 0.3849514335203841, 0.5440203329288118, 0.35316902028787084, 0.21878976145989754, 0.22397186573235703, 0.013087026363035057, 0.0, 0.00026723412678774684, 0.0, 0.0, 0.0016120830493254136, 0.6405065062263887, 0.0, 0.0, 0.0008228460793804453, 0.0, 0.0, 0.0, 0.006625088524889773, 0.010328673134383416, 0.0, 0.007047043264689212, 0.001141891428962934, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.017049180327868854, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.7474724286502987, 0.6091844355724827, 0.9756247792843474, 0.7534996858739952, 0.941846981322938, 0.3689952870299641, 0.042012664467447766, nan, 0.00029030073005257667, 0.0, 0.0, 0.001639452488886933, 0.8248999891880203, nan, 0.0, 0.0020480693934100355, 0.0, 0.0, 0.0, 0.012201279030629418, 0.1278538812785388, nan, 0.007596431917233514, 0.0011993571445705101, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.02137571937517128, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 3.8982 | 5.0 | 100 | 4.0155 | 0.0454 | 0.1041 | 0.4049 | [0.36889651396953876, 0.4398856429680584, 0.6636428229151176, 0.33495076670806645, 0.23050444953925636, 0.22171489032678143, 0.015416709650057885, 0.0, 0.002850158836976852, 0.0, 0.0, 0.012147050002142336, 0.5406326394547745, 0.0, 0.0, 0.00044155958846646355, 0.0, 0.0, 0.0027756749936916477, 0.006107094000390854, 0.019520958083832335, 0.0, 0.01822783018033164, 0.0022817025010969725, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.026120937885643767, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.7159330433116354, 0.5926244540289438, 0.9895370514852624, 0.8862595378857441, 0.9004465717598875, 0.39261778821901616, 0.0509991278959542, nan, 0.003096541120560818, 0.0, 0.0, 0.012477443774481758, 0.85481313295131, nan, 0.0, 0.0006023733510029517, 0.0, 0.0, 0.003134349622453341, 0.010518343991921912, 0.14885844748858448, nan, 0.02097674229155635, 0.0024946628607066612, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0348040559057276, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 3.6283 | 6.0 | 120 | 3.7910 | 0.0497 | 0.0968 | 0.4168 | [0.3622566785407158, 0.46462399117235714, 0.8341980470183569, 0.25461895042570054, 0.23580142187752465, 0.22646461900454185, 0.001589825119236884, 0.0, 0.0011174951241368404, 0.0, 0.0, 0.006928185262440935, 0.6226330176253732, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.003520390379923318, 0.01504907306434024, 0.0, 0.00018081002892960464, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8423599899067896, 0.7079812463585636, 0.9520626134375796, 0.6176093131648329, 0.8641533640916829, 0.3481832725595607, 0.0018958783604443939, nan, 0.0011396991624286344, 0.0, 0.0, 0.0071629769816469345, 0.8475961365192634, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0042494109727364525, 0.031506849315068496, nan, 0.0001832919229359293, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 3.6586 | 7.0 | 140 | 3.7161 | 0.0517 | 0.1056 | 0.4282 | [0.3805749650435149, 0.47410873696071826, 0.7868364761246586, 0.3591007291385363, 0.21068110261898693, 0.32015089287057513, 0.015015015015015015, 0.0, 0.008980530210503629, 0.0, 0.0, 0.01552178616061583, 0.5707148768376984, nan, 0.0, 0.00027815122154744795, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.011042526754579342, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.744104863144035, 0.6136734223049461, 0.9818089238931367, 0.942522995634454, 0.8766623078631172, 0.6134283614141792, 0.050051188715732, nan, 0.009407894029481651, 0.0, 0.0, 0.01626248844681132, 0.8450373013298735, nan, 0.0, 0.000361424010601771, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.011221538837521894, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 3.276 | 8.0 | 160 | 3.7658 | 0.0507 | 0.1022 | 0.4112 | [0.38262195274147187, 0.46468131296139953, 0.8702971390699216, 0.2898497591171621, 0.2530257290830033, 0.2718530024125054, 0.00013599891200870393, 0.0, 0.01839955905196172, 0.0, 0.0, 0.02937338166752978, 0.4554050239909681, nan, 0.0, 0.0009861446674227108, 0.0, 0.0, 0.0, 0.0, 0.0028646455001193603, 0.0, 0.0018258564837538038, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.7270773136123638, 0.572498308809441, 0.9705904092376173, 0.8448056446026708, 0.9016308504601416, 0.5335168557139357, 0.00015167026883555152, nan, 0.019740449643575214, 0.0, 0.0, 0.03120461247304256, 0.9013316754964501, nan, 0.0, 0.0012047467020059033, 0.0, 0.0, 0.0, 0.0, 0.010958904109589041, nan, 0.0018940165370046026, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 3.0029 | 9.0 | 180 | 3.5031 | 0.0568 | 0.1023 | 0.4367 | [0.38493961879342514, 0.49555597488702036, 0.8338808432118094, 0.3652995039545917, 0.2247265001335661, 0.29172534336221606, 0.010377174125464139, 0.0, 0.0016075458842566962, 0.0, 0.0, 0.03193341139981644, 0.5990139111244347, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.890923130337117, 0.6058724392637749, 0.9798132437604198, 0.7264150436823086, 0.8925020354790161, 0.5075666950117463, 0.040268456375838924, nan, 0.0016235337125162623, 0.0, 0.0, 0.03330619250913252, 0.8461635492125275, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 3.2719 | 10.0 | 200 | 3.5634 | 0.0516 | 0.1061 | 0.4188 | [0.393634129531262, 0.46001172064450074, 0.8329058314043613, 0.3956630096862876, 0.21145735909250005, 0.30099020521449976, 0.02093284907820457, 0.0, 0.02544115298626637, 0.0, 0.0, 0.029197533418863553, 0.46378429073856975, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0023581070092698, 0.007280334728033473, 0.0, 0.0020021032195437632, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0005379236148466917, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.7509082940766895, 0.53407602341468, 0.9840509842891518, 0.8269085168419865, 0.9092546445930275, 0.6470749318997449, 0.08042316005005119, nan, 0.027406539292741408, 0.0, 0.0, 0.031050569957308215, 0.8911053447219519, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0024402558061258836, 0.03972602739726028, nan, 0.0020162111522952224, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0005480953685941354, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.9733 | 11.0 | 220 | 3.3877 | 0.0576 | 0.1006 | 0.4344 | [0.3704739367603994, 0.546717691028415, 0.8290715302270899, 0.30168890507742824, 0.26128633731747614, 0.33708007243595245, 0.007301795621308835, 0.0, 0.011698637989787578, 0.0, 0.0, 0.04572016417261456, 0.5720039444451045, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.877993576013068, 0.7172915925345179, 0.8803084680896496, 0.6048617040127583, 0.8662011793442057, 0.537898332444547, 0.02320555113183938, nan, 0.011848570537701464, 0.0, 0.0, 0.04743409180933938, 0.8675712689660143, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 3.1741 | 12.0 | 240 | 3.3514 | 0.0533 | 0.1029 | 0.4234 | [0.3915177527205492, 0.44147329603496643, 0.8400667315445572, 0.3844704389328772, 0.2086454491298782, 0.30382334966258917, 0.016062618964378664, 0.0, 0.019611966980388033, 0.0, 0.0, 0.038059168354814404, 0.5015016296549186, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.355969856688592e-05, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8002549863684743, 0.5458302865834822, 0.9925182526711727, 0.7483125795383153, 0.8122918260097209, 0.6842670394765288, 0.06271565616350055, nan, 0.020052254131409465, 0.0, 0.0, 0.04279081026363276, 0.8470555375355895, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 4.797428578282041e-05, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.5563 | 13.0 | 260 | 3.3722 | 0.0557 | 0.1021 | 0.4226 | [0.37782496004790334, 0.4767717932490223, 0.8499507186567111, 0.33066615515304304, 0.23375701658171494, 0.2564942942150449, 0.05963709556881931, 0.0, 0.017781066652571488, 0.0, 0.0, 0.03590774158667333, 0.5290044021239985, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.004528898686859005, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.9145890526611357, 0.5096133918830674, 0.9844205546840994, 0.5831628461426938, 0.8938836939626459, 0.43425622991222634, 0.22693663974519396, nan, 0.018084660294386445, 0.0, 0.0, 0.03873068967034902, 0.902998522362778, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.004533570006476528, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.8451 | 14.0 | 280 | 3.1796 | 0.0612 | 0.1097 | 0.4619 | [0.39156024730689276, 0.551308867538195, 0.8480320776617073, 0.44210512258960527, 0.22791422404434397, 0.3974314191791834, 0.0229637293786285, 0.0, 0.01439132893094509, 0.0, 0.0, 0.029355184210799964, 0.6230688202247191, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8804922147577156, 0.6843475914709252, 0.9900544500381889, 0.8888584607288798, 0.880560558584787, 0.5985471945577447, 0.09088840859970425, nan, 0.014504284623737998, 0.0, 0.0, 0.031061572994146386, 0.8633636068764191, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.7476 | 15.0 | 300 | 3.3989 | 0.0551 | 0.1123 | 0.4330 | [0.4080696099205872, 0.42335844008283396, 0.8643215338143736, 0.4204801102405771, 0.23651920916785849, 0.31306817021102734, 0.0862659996351171, 0.0, 0.05275076464591012, 0.0, 0.0, 0.036088607463968296, 0.5161180222310522, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.000778816199376947, 0.0, 0.0, 0.0026457379163146427, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8180414239015571, 0.4516632321486527, 0.9611294071269597, 0.8651191262464358, 0.950704399101922, 0.6626262917429342, 0.3406514238046487, nan, 0.05785586031158945, 0.0, 0.0, 0.03837859249152766, 0.9128464338487043, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0009132420091324201, nan, 0.0, 0.0028544700040778145, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.9164 | 16.0 | 320 | 3.3170 | 0.0550 | 0.1122 | 0.4321 | [0.38993464910187264, 0.4802651969437669, 0.8828752549311226, 0.380604088407443, 0.23143937318503097, 0.3454494725991831, 0.05810921943623034, 0.0, 0.05710090461961823, 0.0, 0.0, 0.013345543596814356, 0.4595161203989509, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00016201875367073738, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.7439701605237845, 0.5574476904006851, 0.9848065504299336, 0.935778683463011, 0.9379487306012682, 0.6424196128734705, 0.2518484814014333, nan, 0.05979119851193996, 0.0, 0.0, 0.014270938779103032, 0.9281904350019822, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.00016292615372082603, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.8282 | 17.0 | 340 | 3.2867 | 0.0547 | 0.1109 | 0.4252 | [0.38590376232924806, 0.3864431946841881, 0.8754518615591893, 0.39757661591722726, 0.2198797270729733, 0.3720394694105572, 0.04818361453108375, 0.0, 0.036813854567551145, 0.0, 0.0, 0.0150271013672033, 0.5445358592692828, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.7528244862802536, 0.4070666348631602, 0.9765692369603246, 0.9225209551578416, 0.9381954553304878, 0.8254543620195148, 0.20433776968869677, nan, 0.03908307976818949, 0.0, 0.0, 0.016350512741516658, 0.9064313259091072, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.3337 | 18.0 | 360 | 3.2886 | 0.0555 | 0.1069 | 0.4235 | [0.3927202651812333, 0.5005855354558084, 0.875262413459391, 0.3322465061922771, 0.26062742143779594, 0.3374254049445865, 0.013214185758266363, 0.0, 0.05392194970264703, 0.0, 0.0, 0.0767706528617817, 0.42972571219124805, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0014436958614051972, 0.0, 1.9893767282710327e-05, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.7597398531931161, 0.5382169963673618, 0.9655971025681036, 0.8793111780530631, 0.9560089807801436, 0.5276724846143868, 0.05012702385014978, nan, 0.059662175965249926, 0.0, 0.0, 0.08142247260243828, 0.9507063826720006, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00410958904109589, nan, 2.0365769215103254e-05, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.8002 | 19.0 | 380 | 3.1449 | 0.0604 | 0.1085 | 0.4509 | [0.3868533812781752, 0.5416788002329644, 0.895083801416724, 0.3709154852296421, 0.25466858614327137, 0.39167420726897934, 0.023323072917151015, 0.0, 0.030309736844878647, 0.0, 0.0, 0.03917405644540122, 0.5682338817716277, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8617457839028472, 0.6982642323949996, 0.9723725598088089, 0.762273735307226, 0.9296587796994893, 0.5833057088912269, 0.0950972585598908, nan, 0.030922403690044835, 0.0, 0.0, 0.04364904713700982, 0.8827981403394961, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.4428 | 20.0 | 400 | 3.1723 | 0.0577 | 0.1103 | 0.4485 | [0.39619768427376073, 0.4825493958408961, 0.8844711121174383, 0.47874993207465144, 0.25480198643442975, 0.34537051964103394, 0.05053853332296446, 0.0, 0.03277448374565529, 0.0, 0.0, 0.061568581614166326, 0.5297843423853117, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8984152521310523, 0.5093905066533196, 0.9624270098469978, 0.8515445870988181, 0.94816313439096, 0.5557845581770751, 0.1971334319190081, nan, 0.03447052372402077, 0.0, 0.0, 0.06955019585405572, 0.9307312502252496, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.9851 | 21.0 | 420 | 3.0996 | 0.0611 | 0.1127 | 0.4559 | [0.4010135605343876, 0.5165926521339104, 0.864738873916527, 0.4698051152863791, 0.2666893117578613, 0.3927616715623885, 0.05379022178430997, 0.0, 0.04327767360180969, 0.0, 0.0, 0.022643586151691014, 0.5737018049650562, nan, 0.0, 0.0, 0.0, 0.0, 0.0006169031462060457, 8.12842918106076e-05, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0008219178082191781, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8636771538664395, 0.6081599455690807, 0.9750088286261015, 0.8652157804017591, 0.9104389232932817, 0.6426934551691337, 0.22356197626360294, nan, 0.04689969572182739, 0.0, 0.0, 0.024899872364772677, 0.9215680974519768, nan, 0.0, 0.0, 0.0, 0.0, 0.0009972930616896994, 8.41467519353753e-05, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.000822143052891203, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.8011 | 22.0 | 440 | 3.1982 | 0.0564 | 0.1127 | 0.4435 | [0.41858414374416514, 0.46021764897567474, 0.8761636192728706, 0.4231863513893916, 0.23095789208572695, 0.3728752458797991, 0.05068186631552777, 0.0, 0.07254557547025017, 0.0, 0.0, 0.027401637707209905, 0.48257889612249183, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.02354830077602355, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.828531153301068, 0.5025436288061563, 0.949221027734205, 0.9041137082440625, 0.9502602945893267, 0.6857587593502731, 0.20869828991771888, nan, 0.09147698560323417, 0.0, 0.0, 0.030192333083931166, 0.9109813673550293, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.024116196218141955, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.3434 | 23.0 | 460 | 3.0638 | 0.0577 | 0.1114 | 0.4544 | [0.40165049275831083, 0.507995590348313, 0.8918478788648435, 0.4444788273615635, 0.2746630071493393, 0.3883609710916766, 0.029570773263433815, 0.0, 0.030956636252469194, 0.0, 0.0, 0.019971600486265055, 0.5311679162683037, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8387097386200001, 0.6414636912140207, 0.9805031084976553, 0.8792628509754015, 0.918482149465841, 0.6285041004280587, 0.12319417586167672, nan, 0.035384433429741846, 0.0, 0.0, 0.02151093701861714, 0.9510848019605723, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.6275 | 24.0 | 480 | 3.0454 | 0.0611 | 0.1125 | 0.4545 | [0.3986234496092969, 0.49864655103868133, 0.8908842936165674, 0.4497267007783604, 0.2636574252297527, 0.32828489017084533, 0.046670676970061546, 0.0, 0.050056953139486625, 0.0, 0.0, 0.024963289280469897, 0.6233010812449474, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00933099858475857, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0832350860646074, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8965104432475094, 0.5640130290102723, 0.9602670762054154, 0.9127642551454913, 0.9293873824973478, 0.5331205050228442, 0.19121829143442157, nan, 0.06095240143215027, 0.0, 0.0, 0.028993002068570927, 0.889204238296032, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.00933099858475857, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0967388325568649, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.9599 | 25.0 | 500 | 3.1466 | 0.0563 | 0.1128 | 0.4372 | [0.41127177891600164, 0.46575695877726336, 0.8965847239667444, 0.4423164494655299, 0.25211535363299503, 0.3572440382074552, 0.03792146409826606, 0.0, 0.1007402371780346, 0.0, 0.0, 0.03661754479552664, 0.4982757797201127, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.006786781934232851, 0.00033407572383073496, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.042478354978354976, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.7889551440274566, 0.522388234788083, 0.9848640391580366, 0.902320236695287, 0.9652364856529569, 0.5942954325987634, 0.1569787282447958, nan, 0.14339780876708205, 0.0, 0.0, 0.042295673605915234, 0.939957472879951, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.006842898456274693, 0.0003598071433711531, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.043025486434639625, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.8633 | 26.0 | 520 | 3.0706 | 0.0611 | 0.1106 | 0.4518 | [0.38994658664496407, 0.49370947081172684, 0.8883858060440138, 0.4688081893981783, 0.24840472828076782, 0.37372282958505965, 0.054418988985444075, 0.0, 0.058632015951144804, 0.0, 0.0, 0.026662877232871234, 0.5962126530466547, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.003765258891526968, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.9153953711626354, 0.5320035818047447, 0.9700647980092475, 0.82331620406914, 0.937405936196985, 0.5648717985673725, 0.22723998028286505, nan, 0.06544668680851978, 0.0, 0.0, 0.027870692311077857, 0.9029354524813493, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.003765981433951402, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.6615 | 27.0 | 540 | 2.9819 | 0.0602 | 0.1133 | 0.4587 | [0.40176908670767325, 0.4935155275304932, 0.8989296853379607, 0.49002573182848846, 0.2697391610856539, 0.400059997620784, 0.019648038940372554, 0.0, 0.05516763318511505, 0.0, 0.0, 0.031787696019300364, 0.535792363881262, nan, 0.0, 0.0, 0.0, 0.0, 0.0012696332398173366, 0.0059145760558907585, 0.0, 0.0, 0.0, 0.009537173571802312, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.12119219811527504, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8701827212585399, 0.685223491321162, 0.9870486108259487, 0.8262426771053154, 0.9062692753694703, 0.5573987864462476, 0.07958897357145565, nan, 0.07016676164159687, 0.0, 0.0, 0.0362440033449232, 0.9318484881248423, nan, 0.0, 0.0, 0.0, 0.0, 0.0029443890392743506, 0.00626893301918546, 0.0, nan, 0.0, 0.009618844299455491, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.15154836941627844, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.0586 | 28.0 | 560 | 3.0440 | 0.0599 | 0.1164 | 0.4531 | [0.40171468113652053, 0.5233187718117763, 0.8926763287944506, 0.48027994753309416, 0.25681509076932174, 0.35874871731086166, 0.09419245517036572, 0.0, 0.09596786077137251, 0.0, 0.0, 0.01677077535201105, 0.5384269639431997, nan, 0.0, 0.0, 0.0, 0.0, 4.620324808834061e-05, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.05689063266307013, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8822035072008986, 0.5869428358039704, 0.9702044134917832, 0.8316821581798949, 0.9643729491006884, 0.49883977343153224, 0.4089030447806469, nan, 0.1330437493952068, 0.0, 0.0, 0.017901940935698253, 0.9282174649511659, nan, 0.0, 0.0, 0.0, 0.0, 9.498029158949519e-05, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0635790627569197, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.4707 | 29.0 | 580 | 3.0078 | 0.0574 | 0.1105 | 0.4423 | [0.39838843212467284, 0.5062870980536787, 0.887361235402445, 0.424692917590995, 0.2894622375861359, 0.33955829599387827, 0.023558551319259966, 0.0, 0.08507824202441279, 0.0, 0.0, 0.04338517547417167, 0.5450031166681508, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.422528359463105e-05, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.018543046357615896, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8161593816580565, 0.5689986196756824, 0.9853650123600766, 0.8881765119663214, 0.9358762428758235, 0.5532118818730813, 0.09449057748454859, nan, 0.11480856279634866, 0.0, 0.0, 0.050613969455569736, 0.9374526975889286, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 4.797428578282041e-05, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.01918333790079474, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.3796 | 30.0 | 600 | 2.9648 | 0.0581 | 0.1108 | 0.4498 | [0.39855196661531195, 0.5209957159303226, 0.8909988549675079, 0.40551327004760684, 0.2897538960581059, 0.41507648713985656, 0.006226603209240204, 0.0, 0.08406138388476785, 0.0, 0.0, 0.03647697756788666, 0.5509980668795591, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.006512890094979647, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.798524721725256, 0.6476810160438263, 0.9841577490699145, 0.9166035729819417, 0.9124127211270385, 0.6038150555611606, 0.025101429492283774, nan, 0.11514187104196458, 0.0, 0.0, 0.042493728269002246, 0.929659062240963, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.006577144423129624, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.351 | 31.0 | 620 | 2.9798 | 0.0578 | 0.1136 | 0.4510 | [0.40868524828498876, 0.48146085385816695, 0.8996394367048899, 0.45394785596851267, 0.261096202871327, 0.3624850543654202, 0.044936767285147235, 0.0, 0.11868812518230015, 0.0, 0.0, 0.024970499790643676, 0.5388361076560721, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.008626887131560028, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0358604091456077, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.87358064228486, 0.5002834943711704, 0.9774397805573122, 0.8880905971615897, 0.960227973649799, 0.6051770606632748, 0.1845827171728662, nan, 0.14875224445471846, 0.0, 0.0, 0.028871968663351084, 0.917261325548708, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.008923217155604596, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.040833104960263086, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.0727 | 32.0 | 640 | 3.0076 | 0.0591 | 0.1192 | 0.4549 | [0.41325460886895865, 0.5059136124550379, 0.9010746108837814, 0.4417688948955193, 0.28983656206740954, 0.4465339651202351, 0.025903254702896387, 0.0, 0.10671502137715308, 0.0, 0.0, 0.034965806309287445, 0.46958300900700545, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001382865044943114, 0.0, 0.0, 0.0, 0.006372141161190819, 0.0258011471526731, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.11359377252414589, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.786779032683409, 0.6566746305775074, 0.9675435066481608, 0.912710558392534, 0.9350127063235548, 0.6526238415750256, 0.10525916657187274, nan, 0.16799810766931522, 0.0, 0.0, 0.04185555213238854, 0.9512199517064908, nan, 0.0, 0.0, 0.0, 0.0, 0.003134349622453341, 0.0, 0.0, nan, 0.006394851533542422, 0.03161505433087865, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.21594957522608935, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.509 | 33.0 | 660 | 2.9574 | 0.0598 | 0.1099 | 0.4501 | [0.39003503467218836, 0.5006520333992297, 0.8909112789193417, 0.41623088470611525, 0.40006458557588803, 0.3452544380923725, 0.009562726823215502, 0.0, 0.12358409694306426, 0.0, 0.0, 0.053615722190421676, 0.5600836742587256, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0013520311910451516, 0.0, 0.0, 0.0, 0.0, 0.01553952979526223, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.05997693194925029, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.9084325737496846, 0.5179187993915624, 0.9728078316073027, 0.8129097733460058, 0.9169771286176014, 0.5292146491215427, 0.03886550638911007, nan, 0.15965464965002635, 0.0, 0.0, 0.06183706703050042, 0.926352398457491, nan, 0.0, 0.0, 0.0, 0.0, 0.0020420762691741464, 0.0, 0.0, nan, 0.0, 0.016695051452421502, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0712523979172376, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.9774 | 34.0 | 680 | 2.9613 | 0.0605 | 0.1159 | 0.4472 | [0.4032043730706014, 0.49005445332208886, 0.8999721349871596, 0.4486175963944834, 0.28643575451944914, 0.35683293247440745, 0.01875600248573527, 0.0, 0.12956005243922983, 0.0, 0.0, 0.04747182975110512, 0.5348329340822733, 0.0, 0.0, 0.004999725289819241, 0.0, 0.0, 0.0013948371380299398, 0.0, 0.0, 0.0, 0.004140326782865106, 0.005348516218081435, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.12236286919831224, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.821456419907909, 0.527154850490934, 0.9814229281473026, 0.8894061676090447, 0.9585502454911056, 0.6382831529337157, 0.07553179388010466, nan, 0.19020073757889192, 0.0, 0.0, 0.05659962149553277, 0.933389195228313, nan, 0.0, 0.00548159749412686, 0.0, 0.0, 0.0033718003514270787, 0.0, 0.0, nan, 0.004174982689096167, 0.00594881143706973, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.16689503973691422, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.3159 | 35.0 | 700 | 2.9346 | 0.0624 | 0.1154 | 0.4541 | [0.3967801656985792, 0.5560759595507347, 0.8909327969006109, 0.4287167091629943, 0.27348066298342544, 0.35888412918189133, 0.022443983830577465, 0.0, 0.10658222852119624, 0.0, 0.0, 0.04496837157205681, 0.5648853693653783, nan, 0.0, 0.014322781525687597, 0.0, 0.0, 0.01697486791375125, 0.0, 0.0, 0.0, 0.0, 0.03473754095903026, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.09884929987522528, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8876238647509804, 0.6216386365680366, 0.9820799421827648, 0.7809655750116791, 0.952604179516913, 0.5174033985270167, 0.09073673833086869, nan, 0.12775382498091542, 0.0, 0.0, 0.053892874433343604, 0.9190813421270768, nan, 0.0, 0.016625504487681464, 0.0, 0.0, 0.045163128650804955, 0.0, 0.0, nan, 0.0, 0.038907145769867355, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.19539599890380926, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.3852 | 36.0 | 720 | 2.9270 | 0.0613 | 0.1200 | 0.4552 | [0.4254229381284313, 0.4888272651405433, 0.8959242922926509, 0.4472104320320191, 0.2932553977208365, 0.4111456139826328, 0.0281153517703475, 0.0, 0.1447234659226094, 0.0, 0.0, 0.050443709280573554, 0.5151018519885623, 0.0, 0.0, 0.009063160366283166, 0.0, 0.0, 0.0037032592110000516, 0.0, 0.0, 0.0, 0.01192418474429472, 0.034037584066777676, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.10296658986175115, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8373911707175475, 0.5107630104365031, 0.9858824109130031, 0.8927783236947662, 0.9580814685055883, 0.6936641540434977, 0.11360103135782808, nan, 0.21582246497575452, 0.0, 0.0, 0.06023062365212799, 0.9430028471546473, nan, 0.0, 0.011625805674356967, 0.0, 0.0, 0.01025787149166548, 0.0, 0.0, nan, 0.012056535375341126, 0.03775576291107966, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.1959440942724034, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 2.1402 | 37.0 | 740 | 2.9398 | 0.0620 | 0.1173 | 0.4606 | [0.4223035263402758, 0.5094182057849878, 0.8962940869435538, 0.4436519707569334, 0.28216223292074527, 0.3776633539548852, 0.008834390682700501, 0.0, 0.14524188213436176, 0.0, 0.0, 0.05276351292540691, 0.5956512513409329, 0.0, 0.0, 0.0030471787327939477, 0.0, 0.0, 0.001824386684123543, 0.0, 0.0, 0.0, 0.018751383216304852, 0.03871739650634123, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.11195133979944107, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8868953322696254, 0.5319253764609736, 0.9871800136330412, 0.9075127127062627, 0.9572672768991636, 0.5745427554300044, 0.03537709020589239, nan, 0.22189727654907695, 0.0, 0.0, 0.06670040931297037, 0.9105218582189065, nan, 0.0, 0.0034937654358171196, 0.0, 0.0, 0.004036662392553545, 0.0, 0.0, nan, 0.018980896908476232, 0.03881119719830171, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.18662647300630308, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.8155 | 38.0 | 760 | 2.9118 | 0.0626 | 0.1194 | 0.4625 | [0.4190383753229526, 0.5019843587800783, 0.9122145865661716, 0.4997097765584999, 0.273329225888235, 0.4021394049064297, 0.017948082693313515, 0.0, 0.1381760481255138, 0.0, 0.0, 0.03519455868396077, 0.5907354562921361, nan, 0.0, 0.0115396775213405, 0.0, 0.0, 0.002848335166122839, 0.0, 0.0, 0.0, 0.018416243654822334, 0.02954796549211349, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.09110437199898913, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8600515664396959, 0.5880689927542749, 0.9800596240237182, 0.8783285274739436, 0.9522587648960055, 0.6325829093582002, 0.0741667614605847, nan, 0.23313298999000076, 0.0, 0.0, 0.044067162536860174, 0.9363264497062745, nan, 0.0, 0.01319197638696464, 0.0, 0.0, 0.007503443035570119, 0.0, 0.0, nan, 0.01847175267809865, 0.030151838614502626, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.1975883803781858, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.5568 | 39.0 | 780 | 2.8854 | 0.0647 | 0.1194 | 0.4688 | [0.4224814606797645, 0.5596298858768196, 0.9102870174742768, 0.4723640639111197, 0.28282399132702396, 0.41134304392705584, 0.003113047274659985, 0.0, 0.13887857147862895, 0.0, 0.0, 0.05047816672681343, 0.5759129264465256, nan, 0.0, 0.0025584255842558425, 0.0, 0.0, 0.006596742493529744, 0.0, 0.0, 0.0, 0.05668434328387853, 0.042575986957028064, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.07587587587587588, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8833911669231076, 0.626252751850534, 0.980141750778151, 0.8931166132383975, 0.952604179516913, 0.568604701439834, 0.012436962044515224, nan, 0.21306998397970045, 0.0, 0.0, 0.06156199110954624, 0.9291725231556565, nan, 0.0, 0.0031323414252153485, 0.0, 0.0, 0.016099159424419432, 0.0, 0.0, nan, 0.05812390533990469, 0.043848497205497855, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.20772814469717732, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.7911 | 40.0 | 800 | 2.8534 | 0.0633 | 0.1166 | 0.4671 | [0.41145112706721, 0.5624394397627507, 0.900155724570629, 0.4748438051893156, 0.3094077553355194, 0.3880899550814978, 0.002816581917166027, 0.0, 0.13526457471172004, 0.0, 0.0, 0.050124997577566326, 0.5646437270951743, 0.0, 0.0, 0.0008599322171075927, 0.0, 0.0, 0.002325035227806482, 0.003937739143355994, 0.0, 0.0, 0.005711090698510324, 0.04057337220602527, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.07380789413643624, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.9049587639232233, 0.5992093439744738, 0.9826876801655675, 0.880691184604067, 0.947521650094989, 0.5684533675395991, 0.011299435028248588, nan, 0.1809863773694453, 0.0, 0.0, 0.05691870956383962, 0.9273164666450427, nan, 0.0, 0.0010240346967050177, 0.0, 0.0, 0.0047015244336800115, 0.004459777852574891, 0.0, nan, 0.005824609995519531, 0.044064381491520546, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.1773088517402028, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.3954 | 41.0 | 820 | 2.8746 | 0.0658 | 0.1222 | 0.4677 | [0.429931223109492, 0.5447907709945543, 0.9040346168923198, 0.4577431526371508, 0.2704986092082866, 0.4418497787989148, 0.01935283077590516, 0.0, 0.15359412363098304, 0.0, 0.0, 0.04569787855987685, 0.5513937282229965, nan, 0.0, 0.005778805120910384, 0.0, 0.0, 0.0002426863169042051, 0.0, 0.0, 0.0, 0.05380778050448566, 0.03783722253841776, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.09748731577675768, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8571013893342085, 0.5946030492263537, 0.9848804645089231, 0.9054776057691791, 0.9645209839382202, 0.6513627257397345, 0.07780684791263792, nan, 0.2500026879697227, 0.0, 0.0, 0.05814004665287619, 0.9296410422748406, nan, 0.0, 0.007830853563038372, 0.0, 0.0, 0.0005223916037422235, 0.0, 0.0, nan, 0.054478432650401205, 0.03986663148552376, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.22115648122773363, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 0.9732 | 42.0 | 840 | 2.8900 | 0.0644 | 0.1202 | 0.4688 | [0.4224535909948316, 0.5667352484899163, 0.9051175782969744, 0.4971466251049035, 0.2811067558885737, 0.4085123427486352, 0.01438068482100637, 0.0, 0.14828315492388425, 0.0, 0.0, 0.0393518100568746, 0.5720518449529892, 0.0, 0.0, 0.011198884758364312, 0.0, 0.0, 0.003992622768083339, 0.0, 0.0, 0.0, 0.034013197360527894, 0.027279091299144142, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.12596242685555897, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8888912077133376, 0.6042535886477123, 0.9792958452074932, 0.8906573019529509, 0.960795440527004, 0.5953259443956013, 0.0588101467409851, nan, 0.22087584805444752, 0.0, 0.0, 0.04796223757757141, 0.9297311421054528, nan, 0.0, 0.014517197759171135, 0.0, 0.0, 0.009355558721565274, 0.0, 0.0, nan, 0.034642173434890636, 0.028976468612823526, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.22417100575500137, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.1853 | 43.0 | 860 | 2.8886 | 0.0631 | 0.1178 | 0.4648 | [0.4164725374604488, 0.5243178591788601, 0.9001648667453187, 0.4662798686073323, 0.3077297389357923, 0.3811166220093875, 0.006105440584324375, 0.0, 0.14801813775310765, 0.0, 0.0, 0.044645641844166806, 0.5946926517645833, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01479727730097662, 0.0, 0.0, 0.0, 0.035294797242654136, 0.03421671346070442, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.1022539857064321, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.9032398826000262, 0.5502449782393631, 0.9775301199871882, 0.9001884756028803, 0.936172312550887, 0.5781099116498277, 0.024722253820194898, nan, 0.20356532304020128, 0.0, 0.0, 0.053551780291360415, 0.9259829891519804, nan, 0.0, 0.0, 0.0, 0.0, 0.03324310205632331, 0.0, 0.0, nan, 0.03733045497128427, 0.035956727194223895, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.20389147711701835, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.5388 | 44.0 | 880 | 2.9169 | 0.0629 | 0.1195 | 0.4655 | [0.42280357202959284, 0.5407059981507442, 0.897838057411085, 0.4544268542552529, 0.29147934224362587, 0.4043066713744686, 0.011111215736494694, 0.0, 0.15802679265769665, 0.0, 0.0, 0.052951951007419194, 0.5589473570284428, 0.0, 0.0, 0.000828137178487919, 0.0, 0.0, 0.01231741658394279, 0.00011003521126760564, 0.0, 0.0, 0.01925285678132726, 0.048688814965222274, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0885954381752701, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8818202687601857, 0.5670904092876666, 0.9802238775325838, 0.901530894426814, 0.943771434210851, 0.6023809866970296, 0.0447427293064877, nan, 0.2315954713086112, 0.0, 0.0, 0.06298138286166982, 0.930632140411576, nan, 0.0, 0.0010240346967050177, 0.0, 0.0, 0.028874008643206536, 0.00012622012790306295, 0.0, nan, 0.020141745753737117, 0.0540670200772386, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.20224719101123595, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.4788 | 45.0 | 900 | 2.8847 | 0.0640 | 0.1187 | 0.4650 | [0.4180026057388679, 0.53775172418872, 0.9016485390533218, 0.4698107243055281, 0.3024590098978516, 0.3888761941804215, 0.007401875519927399, 0.0, 0.15458886733323385, 0.0, 0.0, 0.05495073757552665, 0.5560196942769547, 0.0, 0.0, 0.004654697444666097, 0.0, 0.0, 0.011835570115185459, 0.00491101100361864, 0.0, 0.0, 0.02039043040027805, 0.04625127942681678, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.08618625807996169, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8892953155740893, 0.571059330484052, 0.9769716580570452, 0.8937985620009558, 0.9416742740124843, 0.5764092068662352, 0.029689455124559207, nan, 0.22275742686034383, 0.0, 0.0, 0.06664539412877954, 0.9330468158719861, nan, 0.0, 0.005903258839828926, 0.0, 0.0, 0.026594481645058652, 0.005595759003702457, 0.0, nan, 0.021506252291149035, 0.05202811293146873, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.19731433269388873, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.3885 | 46.0 | 920 | 2.8785 | 0.0638 | 0.1202 | 0.4641 | [0.4201585638344481, 0.5331688332827749, 0.9046800875805863, 0.46508072779639426, 0.29490482161176335, 0.3960324658712993, 0.008586556818929924, 0.0, 0.15349019827803798, 0.0, 0.0, 0.04837349623355695, 0.553003003003003, 0.0, 0.0, 0.008513394716143062, 0.0, 0.0, 0.00783691959229898, 0.009046917736166631, 0.0, 0.0, 0.020112444212600708, 0.05622912285042682, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0786701459360201, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8733264148043871, 0.5635594380163996, 0.9772919523993331, 0.9017617904645306, 0.9574893291554613, 0.6135869016906158, 0.03461873886171463, nan, 0.2353801326781855, 0.0, 0.0, 0.05680867919545795, 0.9291454932064728, nan, 0.0, 0.01108366965845431, 0.0, 0.0, 0.016431590444982665, 0.010854930999663413, 0.0, nan, 0.021200765752922488, 0.06541293866487563, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.2230748150178131, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.8846 | 47.0 | 940 | 2.8586 | 0.0631 | 0.1209 | 0.4669 | [0.42126351151513125, 0.5619484959872791, 0.903077995061878, 0.45611847191059657, 0.2881597611048899, 0.4248252540044385, 0.007047834701386924, 0.0, 0.15031563312291124, 0.0, 0.0, 0.04944146019590714, 0.5475122942792562, 0.0, 0.0, 0.00926526452330214, 0.0, 0.0, 0.011057512280006168, 7.082152974504249e-05, 0.0, 0.0, 0.012477576240781343, 0.060697546763501246, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.06901712955263596, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.859326828398348, 0.6163245834587877, 0.9792547818302768, 0.9092954449044466, 0.9523327823147714, 0.60836227894441, 0.028324422705039244, nan, 0.23937983162557658, 0.0, 0.0, 0.058536155979050215, 0.9319025480232097, nan, 0.0, 0.012047467020059032, 0.0, 0.0, 0.02384005318896329, 8.41467519353753e-05, 0.0, nan, 0.012748971528654637, 0.06896303581280434, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.22745957796656618, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.5732 | 48.0 | 960 | 2.8888 | 0.0647 | 0.1203 | 0.4644 | [0.41528233229777933, 0.5486457742747138, 0.8974603436756327, 0.49323992025137897, 0.28029349194633785, 0.3990796269551808, 0.011545940690325718, 0.0, 0.15427116913204844, 0.0, 0.0, 0.048772232959555385, 0.5637565075892096, 0.0, 3.0967102614655695e-05, 0.01218417945690673, 0.0, 0.0, 0.011959057567414196, 0.0011873399873845126, 0.0, 0.0, 0.03245594044363415, 0.04846399903434124, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0902263705610986, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8764776972302485, 0.5789424291361829, 0.9818089238931367, 0.8993561759320414, 0.95571291110508, 0.5874493752071833, 0.04682819550297653, nan, 0.21765028438719666, 0.0, 0.0, 0.06025262972580432, 0.9239647529462645, nan, 3.0967102614655695e-05, 0.015541232455876151, 0.0, 0.0, 0.02702189295721138, 0.0013463480309660047, 0.0, nan, 0.034805099588611464, 0.05778502722540718, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.23047410249383393, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.7336 | 49.0 | 980 | 2.8839 | 0.0646 | 0.1219 | 0.4641 | [0.42124436649111535, 0.5461265174255733, 0.901155598467248, 0.44400077150438144, 0.30278251801289346, 0.4082709202643832, 0.014376293632285316, 0.0, 0.15661008171689247, 0.0, 0.0, 0.0460897067280046, 0.5467434298651567, 0.0, 0.00021673615984807828, 0.007981450929682972, 0.0, 0.0, 0.01957213510715792, 0.0010473946059177796, 0.0, 0.0, 0.03429535927588381, 0.051096956829440904, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.101363236587511, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8520718591048536, 0.5968788247300938, 0.9811519098576743, 0.9147134472778431, 0.9455725247341541, 0.6084992000922416, 0.05820346566564289, nan, 0.22110163751115508, 0.0, 0.0, 0.05291360415474671, 0.9377860669621941, nan, 0.0002167697183025899, 0.010782482982952835, 0.0, 0.0, 0.04848743885643729, 0.0011780545270952542, 0.0, nan, 0.03765630727872592, 0.060615510086593584, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.2526719649218964, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
| 1.4673 | 50.0 | 1000 | 2.9069 | 0.0635 | 0.1208 | 0.4617 | [0.41928714602092204, 0.5390214194468376, 0.9012448258150285, 0.4803948505715545, 0.286634627489022, 0.3955429610634541, 0.013755700264604875, 0.0, 0.15830383993025027, 0.0, 0.0, 0.05008873442525973, 0.5408022058058475, 0.0, 0.0, 0.015204209024433743, 0.0, 0.0, 0.011558359136104005, 0.005458675263774912, 0.0, 0.0, 0.02979897667822854, 0.04487737341772152, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.10603847090333576, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.8627759743647633, 0.5675831029534248, 0.9798871578394094, 0.9012946287138017, 0.9550220818632652, 0.5924650130435409, 0.05558715352822963, nan, 0.230359005236165, 0.0, 0.0, 0.059625456626028785, 0.9366147691642339, nan, 0.0, 0.02054093126920065, 0.0, 0.0, 0.0285415776226433, 0.00626893301918546, 0.0, nan, 0.03166877112948556, 0.05442682722060975, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.23869553302274596, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan] |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "other", "tags": ["generated_from_trainer"], "datasets": ["scene_parse_150"], "base_model": "nvidia/mit-b0", "model-index": [{"name": "segformer-b0-scene-parse-150", "results": []}]}
|
sanya94/segformer-b0-scene-parse-150
| null |
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T02:59:35+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #segformer #generated_from_trainer #dataset-scene_parse_150 #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us
|
segformer-b0-scene-parse-150
============================
This model is a fine-tuned version of nvidia/mit-b0 on the scene\_parse\_150 dataset.
It achieves the following results on the evaluation set (a short note on how these metrics are computed follows the list):
* Loss: 2.9069
* Mean Iou: 0.0635
* Mean Accuracy: 0.1208
* Overall Accuracy: 0.4617
* Per Category Iou: [0.41928714602092204, 0.5390214194468376, 0.9012448258150285, 0.4803948505715545, 0.286634627489022, 0.3955429610634541, 0.013755700264604875, 0.0, 0.15830383993025027, 0.0, 0.0, 0.05008873442525973, 0.5408022058058475, 0.0, 0.0, 0.015204209024433743, 0.0, 0.0, 0.011558359136104005, 0.005458675263774912, 0.0, 0.0, 0.02979897667822854, 0.04487737341772152, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.10603847090333576, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan]
* Per Category Accuracy: [0.8627759743647633, 0.5675831029534248, 0.9798871578394094, 0.9012946287138017, 0.9550220818632652, 0.5924650130435409, 0.05558715352822963, nan, 0.230359005236165, 0.0, 0.0, 0.059625456626028785, 0.9366147691642339, nan, 0.0, 0.02054093126920065, 0.0, 0.0, 0.0285415776226433, 0.00626893301918546, 0.0, nan, 0.03166877112948556, 0.05442682722060975, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.23869553302274596, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan]
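The `nan` entries in the per-category arrays above are what the standard `mean_iou` metric reports for categories that never occur in the evaluation references; they do not indicate a training error. As a hedged illustration (not the exact evaluation code behind this checkpoint), metrics of this shape are typically produced with the `evaluate` library roughly as follows; the prediction/reference maps are placeholders:

```python
# Hedged sketch of how mean IoU / per-category metrics like those above are
# commonly computed; the inputs are placeholders, not this model's pipeline.
import numpy as np
import evaluate

metric = evaluate.load("mean_iou")

# pred_maps / ref_maps: lists of HxW integer label maps, one per image
pred_maps = [np.zeros((8, 8), dtype=np.int64)]
ref_maps = [np.zeros((8, 8), dtype=np.int64)]

results = metric.compute(
    predictions=pred_maps,
    references=ref_maps,
    num_labels=150,      # scene_parse_150 has 150 semantic categories
    ignore_index=255,    # label value excluded from the statistics
    reduce_labels=False,
)

# per_category_iou / per_category_accuracy are length-150 arrays; categories
# absent from the references come back as nan, matching the arrays above.
print(results["mean_iou"], results["mean_accuracy"], results["overall_accuracy"])
```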
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 6e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 50
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #segformer #generated_from_trainer #dataset-scene_parse_150 #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 50",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|

## VAGO solutions SauerkrautLM-Qwen-32b
Introducing **SauerkrautLM-Qwen-32b** – our Sauerkraut version of the powerful [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)!
The model **SauerkrautLM-Qwen-32b** is a **joint effort** between **VAGO solutions** and **Hyperspace.ai.**
- Finetuned with **SFT**
- Aligned with **DPO**
# Table of Contents
1. [Overview of all SauerkrautLM-Qwen-32b](#all-SauerkrautLM-Qwen-32b)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training procedure](#proceed-of-the-training)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-Qwen-32b
| Model | HF | EXL2 | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Qwen-32b | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Qwen-32b) | coming soon | coming soon | coming soon |
## Model Details
**SauerkrautLM-Qwen-32b**
- **Model Type:** SauerkrautLM-Qwen-32b is a finetuned Model based on [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)
- **Language(s):** German, English
- **License:** [tongyi-qianwen-research](https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE)
- **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/)
### Training procedure:
- We trained this model for 2 epochs on 160k data samples with SFT.
- Afterwards we applied DPO for 1 epoch with 110k data.
- LaserRMT version coming soon
**We taught this model German language skills.** As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still a work in progress).
### Prompt Template:
**English:**
```
<|im_start|>system
You are SauerkrautLM, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
**German:**
```
<|im_start|>system
Du bist SauerkrautLM, ein hilfreicher und freundlicher KI-Assistent.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
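For reference, a minimal sketch of applying this ChatML-style format with Transformers; it assumes the tokenizer ships a chat template matching the prompt format above, and the hardware settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAGOsolutions/SauerkrautLM-Qwen-32b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are SauerkrautLM, a helpful AI assistant."},
    {"role": "user", "content": "Wie funktioniert ein Transformer?"},
]
# assumes the tokenizer provides a ChatML chat template matching the format shown above
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```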
### Example output of german language:
```
Die Karte zeigte eine verborgene Höhle, die in den umliegenden Bergen zu finden war. Mia war von der Idee fasziniert, diese Höhle zu entdecken und ihre Geheimnisse zu lüften. Sie nahm die Karte mit sich und machte sich auf den Weg, um die Höhle zu finden.
Die Wanderung zu den Bergen war eine Herausforderung, aber Mia war fest entschlossen, ihr Abenteuer zu vollenden. Sie überwand steinige Wege und überquerte klirrende Bäche, die ihre Füße kühlten und ihr die Energie für den Rest des Weges gab.
Endlich erreichte Mia die Höhle, die von einem dichten Wald umgeben war. Die Höhle war ein Ort der Geheimnisse und des Staunens, der ihr Herz höher schlagen ließ. Sie betrat die Höhle, und die Dunkelheit umhüllte sie wie ein Schleier aus Stille.
In der Höhle fand Mia eine alte Schatzkiste, die mit einem alten, verwitterten Holz verziert war. Mit zitternden Händen öffnete sie die Schatzkiste und fand darin eine alte, zerfledderte Schriftrolle. Die Schriftrolle war ein geheimnisvolles Artefakt, das ihr die Geschichte der Höhle offenbarte.
```
## Evaluation
**Open LLM Leaderboard:**
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | **73.11** |
| ARC (25-shot) | 59.22 |
| HellaSwag (10-shot) | 82.32 |
| MMLU (5-shot) | 74.40|
| TruthfulQA (0-shot) | 61.03 |
| Winogrande (5-shot) | 82.16 |
| GSM8K (5-shot) | 79.53 |
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/)
## Acknowledgement
Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such a valuable model to the open-source community
|
{"language": ["de", "en"], "license": "other", "tags": ["sft", "dpo"], "license_name": "tongyi-qianwen-research", "license_link": "https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE"}
|
blockblockblock/SauerkrautLM-Qwen-32b-bpw2.5
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"sft",
"dpo",
"conversational",
"de",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:00:30+00:00
|
[] |
[
"de",
"en"
] |
TAGS
#transformers #safetensors #qwen2 #text-generation #sft #dpo #conversational #de #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
!SauerkrautLM
VAGO solutions SauerkrautLM-Qwen-32b
------------------------------------
Introducing SauerkrautLM-Qwen-32b – our Sauerkraut version of the powerful Qwen/Qwen1.5-32B!
The model SauerkrautLM-Qwen-32b is a joint effort between VAGO solutions and URL.
* Finetuned with SFT
* Aligned with DPO
Table of Contents
=================
1. Overview of all SauerkrautLM-Qwen-32b
2. Model Details
* Prompt template
* Training procedure
3. Evaluation
4. Disclaimer
5. Contact
6. Collaborations
7. Acknowledgement
All SauerkrautLM-Qwen-32b
-------------------------
Model Details
-------------
SauerkrautLM-Qwen-32b
* Model Type: SauerkrautLM-Qwen-32b is a finetuned Model based on Qwen/Qwen1.5-32B
* Language(s): German, English
* License: tongyi-qianwen-research
* Contact: VAGO solutions, URL
### Training procedure:
* We trained this model for 2 epochs on 160k data samples with SFT.
* Afterwards we applied DPO for 1 epoch with 110k data.
* LaserRMT version coming soon
We teached German language skills on this model. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).
### Prompt Template:
English:
German:
### Example output of german language:
Evaluation
----------
Open LLM Leaderboard:
Disclaimer
----------
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
Contact
-------
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
Collaborations
--------------
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer
Acknowledgement
---------------
Many thanks to Qwen for providing such valuable model to the Open-Source community
|
[
"### Training procedure:\n\n\n* We trained this model for 2 epochs on 160k data samples with SFT.\n* Afterwards we applied DPO for 1 epoch with 110k data.\n* LaserRMT version coming soon\n\n\nWe teached German language skills on this model. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).",
"### Prompt Template:\n\n\nEnglish:\n\n\nGerman:",
"### Example output of german language:\n\n\nEvaluation\n----------\n\n\nOpen LLM Leaderboard:\n\n\n\nDisclaimer\n----------\n\n\nWe must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.\nHowever, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.\nAdditionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.\n\n\nContact\n-------\n\n\nIf you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.\n\n\nCollaborations\n--------------\n\n\nWe are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer\n\n\nAcknowledgement\n---------------\n\n\nMany thanks to Qwen for providing such valuable model to the Open-Source community"
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #sft #dpo #conversational #de #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training procedure:\n\n\n* We trained this model for 2 epochs on 160k data samples with SFT.\n* Afterwards we applied DPO for 1 epoch with 110k data.\n* LaserRMT version coming soon\n\n\nWe teached German language skills on this model. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).",
"### Prompt Template:\n\n\nEnglish:\n\n\nGerman:",
"### Example output of german language:\n\n\nEvaluation\n----------\n\n\nOpen LLM Leaderboard:\n\n\n\nDisclaimer\n----------\n\n\nWe must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.\nHowever, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.\nAdditionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.\n\n\nContact\n-------\n\n\nIf you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.\n\n\nCollaborations\n--------------\n\n\nWe are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer\n\n\nAcknowledgement\n---------------\n\n\nMany thanks to Qwen for providing such valuable model to the Open-Source community"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
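Once produced, the merged checkpoint loads like any other Hugging Face causal LM. A minimal sketch follows; the repo id is taken from this card and assumed to contain the merged weights, and the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

merged_id = "Taf2023/mergekit-slerp-xvskemx"  # assumed to hold the merged weights
tokenizer = AutoTokenizer.from_pretrained(merged_id)
model = AutoModelForCausalLM.from_pretrained(merged_id, torch_dtype="auto", device_map="auto")

prompt = "Solve step by step: 17 * 23 = ?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```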
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Hermes-2-Pro-Mistral-7B", "WizardLM/WizardMath-7B-V1.1"]}
|
Taf2023/mergekit-slerp-xvskemx
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:WizardLM/WizardMath-7B-V1.1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:03:18+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* NousResearch/Hermes-2-Pro-Mistral-7B
* WizardLM/WizardMath-7B-V1.1
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# style-dailymed-from-facebook
This model is a fine-tuned version of [facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
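For reference, a minimal sketch for loading the resulting adapter with PEFT; the repo id is assumed from this card and the prompt is illustrative:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "RuoxiL/style-dailymed-from-facebook"  # assumed repo id
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.float16, device_map="auto")
# if the adapter repo does not include a tokenizer, fall back to the base model "facebook/opt-2.7b"
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

prompt = "INDICATIONS AND USAGE:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```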
|
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "facebook/opt-2.7b", "model-index": [{"name": "style-dailymed-from-facebook", "results": []}]}
|
RuoxiL/style-dailymed-from-facebook
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:facebook/opt-2.7b",
"license:other",
"region:us"
] | null |
2024-04-15T03:05:11+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-facebook/opt-2.7b #license-other #region-us
|
# style-dailymed-from-facebook
This model is a fine-tuned version of facebook/opt-2.7b on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# style-dailymed-from-facebook\n\nThis model is a fine-tuned version of facebook/opt-2.7b on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 3\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-facebook/opt-2.7b #license-other #region-us \n",
"# style-dailymed-from-facebook\n\nThis model is a fine-tuned version of facebook/opt-2.7b on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 3\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MLMA_Lab_8_GPT_model_Task5
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1759
- Precision: 0.5233
- Recall: 0.6157
- F1: 0.5657
- Accuracy: 0.9544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 474 | 0.1577 | 0.4240 | 0.6032 | 0.4979 | 0.9447 |
| 0.0431 | 2.0 | 948 | 0.1545 | 0.4980 | 0.6269 | 0.5551 | 0.9534 |
| 0.0653 | 3.0 | 1422 | 0.1759 | 0.5233 | 0.6157 | 0.5657 | 0.9544 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.0
- Datasets 2.18.0
- Tokenizers 0.15.2
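For reference, a minimal inference sketch using the token-classification pipeline; the repo id is assumed from this card and the sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="shubhanmathur/MLMA_Lab_8_GPT_model_Task5",  # assumed repo id
    aggregation_strategy="simple",
)
print(ner("Mutations in the BRCA1 gene increase the risk of breast cancer."))
```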
|
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "MLMA_Lab_8_GPT_model_Task5", "results": []}]}
|
shubhanmathur/MLMA_Lab_8_GPT_model_Task5
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:06:40+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
MLMA\_Lab\_8\_GPT\_model\_Task5
===============================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1759
* Precision: 0.5233
* Recall: 0.6157
* F1: 0.5657
* Accuracy: 0.9544
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.0
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spark-name-ja-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7365
- Bleu: 17.2754
- Gen Len: 6.3357
# Japanese Names to English Translation Model
## Model Overview
This translation model is specifically designed to accurately and fluently translate Japanese names and surnames into English.
## Intended Uses and Limitations
This model is built for the Spark IT enterprise, which is looking to automate the translation of Japanese names and surnames into English.
## Training and Evaluation Data
This model has been trained on a diverse dataset consisting of over 144,56 lines of data, encompassing a wide range of Japanese names and surnames along with their English counterparts. Evaluation data has been carefully selected to ensure reliable and accurate translation performance.
## Training Procedure
- 1 day of training
### Hardware Environment:
- Azure Studio
- Standard_DS12_v2
- 4 cores, 28GB RAM, 56GB storage
- Data manipulation and training on medium-sized datasets (1-10GB)
- 6 cores
- Loss: 0.4618
- Bleu: 70.7674
- Gen Len: 10.2548
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 2.8748 | 1.0 | 1750 | 2.9016 | 16.3954 | 6.1249 |
| 2.3245 | 2.0 | 3500 | 2.7663 | 16.9405 | 6.216 |
| 2.0804 | 3.0 | 5250 | 2.7365 | 17.2754 | 6.3357 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
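For reference, a minimal sketch of running the fine-tuned model through the translation pipeline; the repo id is assumed from this card and the example name is illustrative:

```python
from transformers import pipeline

translator = pipeline("translation", model="ihebaker10/spark-name-ja-to-en")  # assumed repo id
print(translator("田中 太郎", max_length=16)[0]["translation_text"])
```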
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "Helsinki-NLP/opus-mt-ja-en", "model-index": [{"name": "spark-name-ja-to-en", "results": []}]}
|
ihebaker10/spark-name-ja-to-en
| null |
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-ja-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T03:10:51+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #marian #text2text-generation #generated_from_trainer #base_model-Helsinki-NLP/opus-mt-ja-en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
spark-name-ja-to-en
===================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-ja-en on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.7365
* Bleu: 17.2754
* Gen Len: 6.3357
japan Names to English Translation Model
========================================
Model Overview
--------------
This translation model is specifically designed to accurately and fluently translate japan names and surnames into English.
Intended Uses and Limitations
-----------------------------
This model is built for Spark IT enterprise looking to automate the translation process of japan names and surnames into English.
Training and Evaluation Data
----------------------------
This model has been trained on a diverse dataset consisting of over 144,56 lines of data, encompassing a wide range of Hindi names and surnames along with their English counterparts. Evaluation data has been carefully selected to ensure reliable and accurate translation performance.
Training Procedure
------------------
* 1 days of training
### Hardware Environment:
* Azure Studio
* Standard\_DS12\_v2
* 4 cores, 28GB RAM, 56GB storage
* Data manipulation and training on medium-sized datasets (1-10GB)
* 6 cores
* Loss: 0.4618
* Bleu: 70.7674
* Gen Len: 10.2548
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.1
* Pytorch 2.2.1+cpu
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Hardware Environment:\n\n\n* Azure Studio\n* Standard\\_DS12\\_v2\n* 4 cores, 28GB RAM, 56GB storage\n* Data manipulation and training on medium-sized datasets (1-10GB)\n* 6 cores\n* Loss: 0.4618\n* Bleu: 70.7674\n* Gen Len: 10.2548",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.2.1+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #marian #text2text-generation #generated_from_trainer #base_model-Helsinki-NLP/opus-mt-ja-en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Hardware Environment:\n\n\n* Azure Studio\n* Standard\\_DS12\\_v2\n* 4 cores, 28GB RAM, 56GB storage\n* Data manipulation and training on medium-sized datasets (1-10GB)\n* 6 cores\n* Loss: 0.4618\n* Bleu: 70.7674\n* Gen Len: 10.2548",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.2.1+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
feature-extraction
|
transformers
|
# usage
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
path = "mssma/ko-solar-10.7b-v0.1"
model = AutoModelForCausalLM.from_pretrained(
path,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(path)
```
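A minimal generation step continuing from the snippet above (the prompt is illustrative):
```
# continues from the loading snippet above: `model` and `tokenizer` are already defined
prompt = "대한민국의 수도는"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```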
|
{"language": ["ko"], "license": "apache-2.0", "library_name": "transformers"}
|
mssma/ko-solar-10.7b-v0.1
| null |
[
"transformers",
"safetensors",
"llama",
"feature-extraction",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:11:44+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #safetensors #llama #feature-extraction #ko #license-apache-2.0 #endpoints_compatible #text-generation-inference #region-us
|
# usage
|
[
"# usage"
] |
[
"TAGS\n#transformers #safetensors #llama #feature-extraction #ko #license-apache-2.0 #endpoints_compatible #text-generation-inference #region-us \n",
"# usage"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_translator
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3036
- Bleu: 20.6458
- Gen Len: 18.52
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.5155 | 1.0 | 1556 | 1.3228 | 17.8685 | 18.5208 |
| 1.2945 | 2.0 | 3112 | 1.2303 | 18.5903 | 18.5401 |
| 1.1669 | 3.0 | 4668 | 1.2021 | 19.0779 | 18.5258 |
| 1.0522 | 4.0 | 6224 | 1.1794 | 19.41 | 18.5329 |
| 0.9606 | 5.0 | 7780 | 1.1635 | 19.6192 | 18.5289 |
| 0.8903 | 6.0 | 9336 | 1.1702 | 19.894 | 18.515 |
| 0.8152 | 7.0 | 10892 | 1.1734 | 19.9585 | 18.5129 |
| 0.7499 | 8.0 | 12448 | 1.1959 | 20.1959 | 18.5369 |
| 0.7078 | 9.0 | 14004 | 1.2016 | 20.1621 | 18.5272 |
| 0.6623 | 10.0 | 15560 | 1.2251 | 20.2858 | 18.515 |
| 0.6114 | 11.0 | 17116 | 1.2415 | 20.4039 | 18.5227 |
| 0.5742 | 12.0 | 18672 | 1.2607 | 20.5759 | 18.5248 |
| 0.5333 | 13.0 | 20228 | 1.2762 | 20.5848 | 18.5142 |
| 0.5134 | 14.0 | 21784 | 1.2900 | 20.5416 | 18.517 |
| 0.4932 | 15.0 | 23340 | 1.3036 | 20.6458 | 18.52 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
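For reference, a minimal inference sketch with the text2text-generation pipeline; the repo id is assumed from this card, and the translation direction and any task prefix depend on how the generator dataset was formatted:

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="jsphelps12/my_translator")  # assumed repo id
# the input format (including any task prefix) is an assumption; adjust to match the training data
print(translator("translate English to French: The weather is nice today.", max_length=64))
```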
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["generator"], "metrics": ["bleu"], "base_model": "google-t5/t5-small", "model-index": [{"name": "my_translator", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "generator", "type": "generator", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "bleu", "value": 20.6458, "name": "Bleu"}]}]}]}
|
jsphelps12/my_translator
| null |
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:generator",
"base_model:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:13:24+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #dataset-generator #base_model-google-t5/t5-small #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
my\_translator
==============
This model is a fine-tuned version of google-t5/t5-small on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3036
* Bleu: 20.6458
* Gen Len: 18.52
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #dataset-generator #base_model-google-t5/t5-small #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-800k-epoch3
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the Lichang-Chen/800k_ift dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7384 | 1.0 | 1179 | 6.6521 |
| 3.848 | 2.0 | 2358 | 3.8441 |
| 3.245 | 3.0 | 3537 | 3.2540 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
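For reference, a minimal chat-style generation sketch; the repo id is assumed from this card, and it is assumed that the alignment-handbook recipe stored a chat template with the tokenizer:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lichang-Chen/zephyr-7b-sft-800k-epoch3"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain what supervised fine-tuning does in one paragraph."}]
# assumes a chat template was saved with the tokenizer
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```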
|
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["Lichang-Chen/800k_ift"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "zephyr-7b-sft-800k-epoch3", "results": []}]}
|
Lichang-Chen/zephyr-7b-sft-800k-epoch3
| null |
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:Lichang-Chen/800k_ift",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:13:51+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-Lichang-Chen/800k_ift #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
zephyr-7b-sft-800k-epoch3
=========================
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the Lichang-Chen/800k\_ift dataset.
It achieves the following results on the evaluation set:
* Loss: 3.2540
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 8
* total\_train\_batch\_size: 128
* total\_eval\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-Lichang-Chen/800k_ift #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code. A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file listing):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# the filename below is an assumption; check the repo's file listing for the exact name
checkpoint = load_from_hub(repo_id="WharfRat/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "251.88 +/- 21.17", "name": "mean_reward", "verified": false}]}]}]}
|
WharfRat/ppo-LunarLander-v2
| null |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-15T03:14:58+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
|
[
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
[
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Ruiz3/phi-2-kingshipAIv5-interpreter
| null |
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:18:53+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Model Details
Fine-tuned on publicly released Korean and English datasets.
### Model Description
BASE MODEL : [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- Fine-tuned from the mistralai/Mistral-7B-Instruct-v0.2 model.
This model is fine-tuned from Mistral-7B-Instruct-v0.2 to enhance its performance for specific tasks. The Axolotl library was used during the training phase.
### Applications
This fine-tuned model is particularly suited for [mention applications, e.g., chatbots, question-answering systems, etc.]. Its enhanced capabilities ensure more accurate and contextually appropriate responses in these domains.
### Limitations and Considerations
While our fine-tuning process has optimized the model for specific tasks, it's important to acknowledge potential limitations. The model's performance can still vary based on the complexity of the task and the specificities of the input data. Users are encouraged to evaluate the model thoroughly in their specific context to ensure it meets their requirements.
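For reference, a minimal usage sketch; the repo id is taken from this card, and the `[INST]` format is assumed to follow the Mistral-Instruct convention of the base model:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2",
    device_map="auto",
)
prompt = "[INST] 한국의 수도는 어디인가요? [/INST]"
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```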
|
{"language": ["ko", "en"], "license": "apache-2.0", "library_name": "transformers"}
|
CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null |
2024-04-15T03:21:59+00:00
|
[] |
[
"ko",
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #conversational #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
|
<img src="URL alt="Built with Axolotl" width="200" height="32"/>
## Model Details
Fine-tuned on publicly released Korean and English datasets.
### Model Description
BASE MODEL : mistralai/Mistral-7B-Instruct-v0.2
- fine-tuned the mistralai/Mistral-7B-Instruct-v0.2 model.
This model is fine-tuned from Mistral-7B-Instruct-v0.2 to enhance its performance on specific tasks. During the training phase, I utilized the Axolotl library.
### Applications
This fine-tuned model is particularly suited for [mention applications, e.g., chatbots, question-answering systems, etc.]. Its enhanced capabilities ensure more accurate and contextually appropriate responses in these domains.
### Limitations and Considerations
While our fine-tuning process has optimized the model for specific tasks, it's important to acknowledge potential limitations. The model's performance can still vary based on the complexity of the task and the specificities of the input data. Users are encouraged to evaluate the model thoroughly in their specific context to ensure it meets their requirements.
|
[
"## Model Details\n\n공개된 한국어, 영어 데이터셋으로 파인튜닝하였습니다.",
"### Model Description\nBASE MODEL : mistralai/Mistral-7B-Instruct-v0.2\n\n- fine-tuned the mistralai/Mistral-7B-Instruct-v0.2 model.\n\nThis model is fine-tuned on the Mistral-7B-Instruct-v0.2 to enhance its performance for specific tasks. During the training phase, iam utilized the Axolotl library",
"### Applications\nThis fine-tuned model is particularly suited for [mention applications, e.g., chatbots, question-answering systems, etc.]. Its enhanced capabilities ensure more accurate and contextually appropriate responses in these domains.",
"### Limitations and Considerations\nWhile our fine-tuning process has optimized the model for specific tasks, it's important to acknowledge potential limitations. The model's performance can still vary based on the complexity of the task and the specificities of the input data. Users are encouraged to evaluate the model thoroughly in their specific context to ensure it meets their requirements."
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"## Model Details\n\n공개된 한국어, 영어 데이터셋으로 파인튜닝하였습니다.",
"### Model Description\nBASE MODEL : mistralai/Mistral-7B-Instruct-v0.2\n\n- fine-tuned the mistralai/Mistral-7B-Instruct-v0.2 model.\n\nThis model is fine-tuned on the Mistral-7B-Instruct-v0.2 to enhance its performance for specific tasks. During the training phase, iam utilized the Axolotl library",
"### Applications\nThis fine-tuned model is particularly suited for [mention applications, e.g., chatbots, question-answering systems, etc.]. Its enhanced capabilities ensure more accurate and contextually appropriate responses in these domains.",
"### Limitations and Considerations\nWhile our fine-tuning process has optimized the model for specific tasks, it's important to acknowledge potential limitations. The model's performance can still vary based on the complexity of the task and the specificities of the input data. Users are encouraged to evaluate the model thoroughly in their specific context to ensure it meets their requirements."
] |
text-generation
|
transformers
|
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))  # send inputs to the same device the model was loaded on
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
|
shaswatamitra/mistral-7b-v2-finetuned2
| null |
[
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T03:23:15+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
|
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit AutoTrain.
# Usage
|
[
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] |
[
"TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n",
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
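
As a rough sketch, the list above maps onto a `TrainingArguments` configuration like the following (the output directory is an assumption and the SQuAD-style preprocessing pipeline is omitted):

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the hyperparameters above.
# The Adam betas/epsilon listed match the Trainer's default AdamW settings.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad2",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed-precision training
)
```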
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-squad2", "results": []}]}
|
vubacktracking/distilbert-base-uncased-finetuned-squad2
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T03:25:02+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
|
# distilbert-base-uncased-finetuned-squad2
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# distilbert-base-uncased-finetuned-squad2\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"# distilbert-base-uncased-finetuned-squad2\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
shallow6414/zcotlf7
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T03:26:23+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-160m-v0-finetuned-squad
This model is a fine-tuned version of [EleutherAI/pythia-160m-v0](https://huggingface.co/EleutherAI/pythia-160m-v0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.7926 | 1.0 | 5539 | 4.7825 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m-v0", "model-index": [{"name": "pythia-160m-v0-finetuned-squad", "results": []}]}
|
K-kiron/pythia-160m-v0-finetuned-squad
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m-v0",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T03:26:24+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-EleutherAI/pythia-160m-v0 #license-apache-2.0 #region-us
|
pythia-160m-v0-finetuned-squad
==============================
This model is a fine-tuned version of EleutherAI/pythia-160m-v0 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 4.7825
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.7.2.dev0
* Transformers 4.36.2
* Pytorch 2.2.1+cu121
* Datasets 2.16.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.2.dev0\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-EleutherAI/pythia-160m-v0 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.2.dev0\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralv1_lora_r8_2e4_e3
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
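
The repository name suggests a LoRA adapter of rank 8 trained at this learning rate; a rough PEFT sketch of such a setup is below (the LoRA alpha, dropout, and target modules are assumptions, not taken from this card):

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# Only r=8 is hinted at by the model name; the remaining values are illustrative.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # confirms only the adapter weights are trainable
```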
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistralv1_lora_r8_2e4_e3", "results": []}]}
|
fangzhaoz/mistralv1_lora_r8_2e4_e3
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T03:27:19+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
|
# mistralv1_lora_r8_2e4_e3
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# mistralv1_lora_r8_2e4_e3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n",
"# mistralv1_lora_r8_2e4_e3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
fangzhaoz/mistralv1_lora_r8_2e4_e3_merged
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:27:39+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_instruct_generation
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
- mixed_precision_training: Native AMP
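
As a rough sketch (not the exact training script), these settings correspond to a TRL supervised fine-tuning run along these lines; the in-memory dataset, sequence length, and output directory are placeholders, and the actual run used the generator dataset with a PEFT adapter:

```python
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder data; the real run used the "generator" dataset.
train_data = Dataset.from_dict(
    {"text": ["<s>[INST] example instruction [/INST] example response</s>"]}
)

args = TrainingArguments(
    output_dir="mistral_instruct_generation",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,  # the card's "lr_scheduler_warmup_steps: 0.03" reads like a ratio
    max_steps=100,
    fp16=True,          # Native AMP
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.1",
    args=args,
    train_dataset=train_data,
    dataset_text_field="text",
    max_seq_length=512,  # assumption
)
trainer.train()
```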
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7866 | 1.0 | 20 | 0.7343 |
| 0.6662 | 2.0 | 40 | 0.6764 |
| 0.6019 | 3.0 | 60 | 0.6573 |
| 0.56 | 4.0 | 80 | 0.6523 |
| 0.4894 | 5.0 | 100 | 0.6546 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.1", "model-index": [{"name": "mistral_instruct_generation", "results": []}]}
|
adil0101/mistral_instruct_generation
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T03:30:19+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.1 #license-apache-2.0 #region-us
|
mistral\_instruct\_generation
=============================
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.1 on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6546
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: constant
* lr\_scheduler\_warmup\_steps: 0.03
* training\_steps: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.0
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.19.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] |
[
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.1 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
thienan092/mistral_7b_thienan
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:31:37+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [beowolx/MistralHermes-CodePro-7B-v1](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: beowolx/MistralHermes-CodePro-7B-v1
layer_range: [0, 32]
- model: beowolx/CodeNinja-1.0-OpenChat-7B
layer_range: [0, 32]
merge_method: slerp
base_model: beowolx/MistralHermes-CodePro-7B-v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
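
To reproduce a merge like this, the YAML above can typically be saved to a file and passed to mergekit's command-line entry point, e.g. `mergekit-yaml merge-config.yml ./merged-model` (the file and output names here are illustrative).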
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["beowolx/CodeNinja-1.0-OpenChat-7B", "beowolx/MistralHermes-CodePro-7B-v1"]}
|
K00B404/BagOClownCoders-slerp-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:beowolx/CodeNinja-1.0-OpenChat-7B",
"base_model:beowolx/MistralHermes-CodePro-7B-v1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:32:24+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-beowolx/CodeNinja-1.0-OpenChat-7B #base_model-beowolx/MistralHermes-CodePro-7B-v1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* beowolx/CodeNinja-1.0-OpenChat-7B
* beowolx/MistralHermes-CodePro-7B-v1
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* beowolx/CodeNinja-1.0-OpenChat-7B\n* beowolx/MistralHermes-CodePro-7B-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-beowolx/CodeNinja-1.0-OpenChat-7B #base_model-beowolx/MistralHermes-CodePro-7B-v1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* beowolx/CodeNinja-1.0-OpenChat-7B\n* beowolx/MistralHermes-CodePro-7B-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Yasusan/Llama2_121
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:35:25+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": ["trl", "sft"]}
|
rainerberger/planetn5
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:37:41+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [beowolx/MistralHermes-CodePro-7B-v1](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1) as a base.
### Models Merged
The following models were included in the merge:
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: beowolx/MistralHermes-CodePro-7B-v1
    # no parameters necessary for base model
- model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
density: 0.5
weight: 0.5
- model: beowolx/CodeNinja-1.0-OpenChat-7B
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: beowolx/MistralHermes-CodePro-7B-v1
parameters:
normalize: true
dtype: float16
```
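As a rough illustration of what the TIES method does per parameter tensor (trim low-magnitude task-vector entries, elect a sign per coordinate, then merge the agreeing deltas), here is a simplified sketch. It is not mergekit's implementation; the per-tensor trimming, the toy shapes, and the omission of the final `normalize` step are assumptions made for readability.

```python
import torch

def ties_merge(base: torch.Tensor,
               tuned: list[torch.Tensor],
               weights: list[float],
               density: float = 0.5) -> torch.Tensor:
    """Simplified TIES merge for a single parameter tensor.

    1. Trim:  keep only the top-`density` fraction of each task vector
              (delta from the base model) by magnitude.
    2. Elect: pick a sign per coordinate from the summed deltas.
    3. Merge: average the surviving deltas that agree with the elected sign.
    """
    deltas = []
    for w, t in zip(weights, tuned):
        d = (t - base) * w
        k = max(1, int(density * d.numel()))
        threshold = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        deltas.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))
    stacked = torch.stack(deltas)                      # [n_models, *shape]
    elected_sign = torch.sign(stacked.sum(dim=0))      # sign election per coordinate
    agree = (torch.sign(stacked) == elected_sign) & (stacked != 0)
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged_delta

# Toy usage mirroring the config above: two tuned models with weights 0.5 and 0.3.
base_w  = torch.randn(64, 64)
tuned_a = base_w + 0.1 * torch.randn(64, 64)
tuned_b = base_w + 0.1 * torch.randn(64, 64)
merged_w = ties_merge(base_w, [tuned_a, tuned_b], weights=[0.5, 0.3], density=0.5)
```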
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["beowolx/MistralHermes-CodePro-7B-v1", "beowolx/CodeNinja-1.0-OpenChat-7B", "teknium/OpenHermes-2.5-Mistral-7B"]}
|
K00B404/BagOClownCoders-ties-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:beowolx/MistralHermes-CodePro-7B-v1",
"base_model:beowolx/CodeNinja-1.0-OpenChat-7B",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:41:06+00:00
|
[
"2306.01708"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-beowolx/MistralHermes-CodePro-7B-v1 #base_model-beowolx/CodeNinja-1.0-OpenChat-7B #base_model-teknium/OpenHermes-2.5-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method using beowolx/MistralHermes-CodePro-7B-v1 as a base.
### Models Merged
The following models were included in the merge:
* beowolx/CodeNinja-1.0-OpenChat-7B
* teknium/OpenHermes-2.5-Mistral-7B
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using beowolx/MistralHermes-CodePro-7B-v1 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* beowolx/CodeNinja-1.0-OpenChat-7B\n* teknium/OpenHermes-2.5-Mistral-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-beowolx/MistralHermes-CodePro-7B-v1 #base_model-beowolx/CodeNinja-1.0-OpenChat-7B #base_model-teknium/OpenHermes-2.5-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using beowolx/MistralHermes-CodePro-7B-v1 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* beowolx/CodeNinja-1.0-OpenChat-7B\n* teknium/OpenHermes-2.5-Mistral-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Grayx/sad_pepe_14
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T03:42:22+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Original Model Card
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<p align="left">
<img src="https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0/resolve/main/eeve_logo.webp" width="50%"/>
<p>
# EEVE-Korean-Instruct-10.8B-v1.0
## Join Our Community on Discord!
If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. It's worth noting that Korean is the primary language used in this server. The landscape of LLM is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated swiftly. Let's collaborate and drive greater impact together! Join us here: [Discord Link](https://discord.gg/b27bAHg95m).
## Our Dedicated Team (Alphabetical Order)
| Research | Engineering | Product Management | UX Design |
|-----------------|-----------------|--------------------|--------------|
| Myeongho Jeong | Geon Kim | Bokyung Huh | Eunsue Choi |
| Seungduk Kim | Rifqi Alfi | | |
| Seungtaek Choi | Sanghoon Han | | |
| | Suhyun Kang | | |
## About the Model
This model is a fine-tuned version of [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0), which is a Korean vocabulary-extended version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0). Specifically, we utilized Direct Preference Optimization (DPO) through the use of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
For more details, please refer to our technical report: [Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models](https://arxiv.org/abs/2402.14714).
## Prompt Template
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: {prompt}
Assistant:
```
## How to Use it
```python
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("yanolja/EEVE-Korean-Instruct-10.8B-v1.0")
tokenizer = AutoTokenizer.from_pretrained("yanolja/EEVE-Korean-Instruct-10.8B-v1.0")
prompt_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {prompt}\nAssistant:\n"
text = '한국의 수도는 어디인가요? 아래 선택지 중 골라주세요.\n\n(A) 경성\n(B) 부산\n(C) 평양\n(D) 서울\n(E) 전주'
model_inputs = tokenizer(prompt_template.format(prompt=text), return_tensors='pt')
outputs = model.generate(**model_inputs, max_new_tokens=256)
output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(output_text)
```
### Example Output
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: 한국의 수도는 어디인가요? 아래 선택지 중 골라주세요.
(A) 경성
(B) 부산
(C) 평양
(D) 서울
(E) 전주
Assistant:
(D) 서울이 한국의 수도입니다. 서울은 나라의 북동부에 위치해 있으며, 정치, 경제, 문화의 중심지입니다. 약 1,000만 명이 넘는 인구를 가진 세계에서 가장 큰 도시 중 하나입니다. 서울은 높은 빌딩, 현대적인 인프라, 활기 문화 장면으로 유명합니다. 또한, 많은 역사적 명소와 박물관이 있어 방문객들에게 풍부한 문화 체험을 제공합니다.
```
### Training Data
- Korean-translated version of [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- Korean-translated version of [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned)
- No other dataset was used
## Citation
```
@misc{kim2024efficient,
title={Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models},
author={Seungduk Kim and Seungtaek Choi and Myeongho Jeong},
year={2024},
eprint={2402.14714},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{cui2023ultrafeedback,
title={UltraFeedback: Boosting Language Models with High-quality Feedback},
author={Ganqu Cui and Lifan Yuan and Ning Ding and Guanming Yao and Wei Zhu and Yuan Ni and Guotong Xie and Zhiyuan Liu and Maosong Sun},
year={2023},
eprint={2310.01377},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{SlimOrcaDedup,
title = {SlimOrca Dedup: A Deduplicated Subset of SlimOrca},
author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium" and Nathan Hoos},
year = {2023},
publisher = {HuggingFace},
url = {https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup/}
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_yanolja__EEVE-Korean-Instruct-10.8B-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |66.48|
|AI2 Reasoning Challenge (25-Shot)|64.85|
|HellaSwag (10-Shot) |83.04|
|MMLU (5-Shot) |64.23|
|TruthfulQA (0-shot) |54.09|
|Winogrande (5-shot) |81.93|
|GSM8k (5-shot) |50.72|
|
{"license": "apache-2.0", "base_model": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0"}
|
maywell/EEVE-Korean-Instruct-10.8B-v1.0-32k
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"arxiv:2402.14714",
"arxiv:2310.01377",
"arxiv:2306.02707",
"base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:43:34+00:00
|
[
"2402.14714",
"2310.01377",
"2306.02707"
] |
[] |
TAGS
#transformers #pytorch #llama #text-generation #conversational #arxiv-2402.14714 #arxiv-2310.01377 #arxiv-2306.02707 #base_model-yanolja/EEVE-Korean-Instruct-10.8B-v1.0 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Original Model Card
===================
<img src="URL" alt="Built with Axolotl" width="200" height="32"/>

EEVE-Korean-Instruct-10.8B-v1.0
===============================
Join Our Community on Discord!
------------------------------
If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. It's worth noting that Korean is the primary language used in this server. The landscape of LLM is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated swiftly. Let's collaborate and drive greater impact together! Join us here: Discord Link.
Our Dedicated Team (Alphabetical Order)
---------------------------------------
About the Model
---------------
This model is a fine-tuned version of yanolja/EEVE-Korean-10.8B-v1.0, which is a Korean vocabulary-extended version of upstage/SOLAR-10.7B-v1.0. Specifically, we utilized Direct Preference Optimization (DPO) through the use of Axolotl.
For more details, please refer to our technical report: Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models.
Prompt Template
---------------
How to Use it
-------------
### Example Output
### Training Data
* Korean-translated version of Open-Orca/SlimOrca-Dedup
* Korean-translated version of argilla/ultrafeedback-binarized-preferences-cleaned
* No other dataset was used
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
|
[
"### Example Output",
"### Training Data\n\n\n* Korean-translated version of Open-Orca/SlimOrca-Dedup\n* Korean-translated version of argilla/ultrafeedback-binarized-preferences-cleaned\n* No other dataset was used\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] |
[
"TAGS\n#transformers #pytorch #llama #text-generation #conversational #arxiv-2402.14714 #arxiv-2310.01377 #arxiv-2306.02707 #base_model-yanolja/EEVE-Korean-Instruct-10.8B-v1.0 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Example Output",
"### Training Data\n\n\n* Korean-translated version of Open-Orca/SlimOrca-Dedup\n* Korean-translated version of argilla/ultrafeedback-binarized-preferences-cleaned\n* No other dataset was used\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-chinese
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
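
For context, the sketch below shows how these hyperparameters might be wired into a TRL `SFTTrainer` run with a PEFT adapter, matching the framework versions listed further down. It is an assumption-laden reconstruction, not the actual training script: the LoRA settings, the dataset file and text field, the sequence length, and the output directory are all hypothetical, and the exact `SFTTrainer` keyword arguments can differ between TRL versions.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "google/gemma-2b"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hyperparameters copied from the list above; everything else is assumed.
args = TrainingArguments(
    output_dir="gemma-chinese",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
    seed=42,
)

# Hypothetical LoRA settings -- the card does not state the adapter config.
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

# Hypothetical dataset; the card only says a "generator" dataset was used.
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=train_dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=1024,
)
trainer.train()
```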
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "google/gemma-2b", "model-index": [{"name": "gemma-chinese", "results": []}]}
|
kaierlong/gemma-chinese
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null |
2024-04-15T03:47:59+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us
|
# gemma-chinese
This model is a fine-tuned version of google/gemma-2b on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
[
"# gemma-chinese\n\nThis model is a fine-tuned version of google/gemma-2b on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.38.1\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us \n",
"# gemma-chinese\n\nThis model is a fine-tuned version of google/gemma-2b on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.38.1\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
thusinh1969/LLaMA-2-finetune-50k-checkpoint28100-ep1.42
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:50:44+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [K00B404/Merged_Beowolx-CodePro_Medusa2-14X-7B-Mistral-I-v0-2](https://huggingface.co/K00B404/Merged_Beowolx-CodePro_Medusa2-14X-7B-Mistral-I-v0-2) as a base.
### Models Merged
The following models were included in the merge:
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: K00B404/Merged_Beowolx-CodePro_Medusa2-14X-7B-Mistral-I-v0-2
    # no parameters necessary for base model
- model: Nexusflow/Starling-LM-7B-beta
parameters:
density: 0.5
weight: 0.5
- model: beowolx/CodeNinja-1.0-OpenChat-7B
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: K00B404/Merged_Beowolx-CodePro_Medusa2-14X-7B-Mistral-I-v0-2
parameters:
normalize: true
dtype: float16
```
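In mergekit's TIES implementation, `density` is the fraction of each donor model's parameters retained after sparsification and `weight` scales that model's contribution, so those two values are the main knobs to adjust when reproducing or tweaking this merge.

As a usage sketch (not part of the original card), the merged checkpoint can be loaded like any other Mistral-architecture causal LM with transformers; the repo id is taken from this card, while the prompt and generation settings below are illustrative placeholders.

```python
# Sketch: loading the TIES-merged checkpoint for inference with transformers.
# The repo id comes from this card; prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "K00B404/BagOMistral_14X_Coders-ties-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```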
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["K00B404/Merged_Beowolx-CodePro_Medusa2-14X-7B-Mistral-I-v0-2", "Nexusflow/Starling-LM-7B-beta", "beowolx/CodeNinja-1.0-OpenChat-7B"]}
|
K00B404/BagOMistral_14X_Coders-ties-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:K00B404/Merged_Beowolx-CodePro_Medusa2-14X-7B-Mistral-I-v0-2",
"base_model:Nexusflow/Starling-LM-7B-beta",
"base_model:beowolx/CodeNinja-1.0-OpenChat-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:51:06+00:00
|
[
"2306.01708"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-K00B404/Merged_Beowolx-CodePro_Medusa2-14X-7B-Mistral-I-v0-2 #base_model-Nexusflow/Starling-LM-7B-beta #base_model-beowolx/CodeNinja-1.0-OpenChat-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the TIES merge method using K00B404/Merged_Beowolx-CodePro_Medusa2-14X-7B-Mistral-I-v0-2 as a base.
### Models Merged
The following models were included in the merge:
* Nexusflow/Starling-LM-7B-beta
* beowolx/CodeNinja-1.0-OpenChat-7B
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using K00B404/Merged_Beowolx-CodePro_Medusa2-14X-7B-Mistral-I-v0-2 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* Nexusflow/Starling-LM-7B-beta\n* beowolx/CodeNinja-1.0-OpenChat-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2306.01708 #base_model-K00B404/Merged_Beowolx-CodePro_Medusa2-14X-7B-Mistral-I-v0-2 #base_model-Nexusflow/Starling-LM-7B-beta #base_model-beowolx/CodeNinja-1.0-OpenChat-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the TIES merge method using K00B404/Merged_Beowolx-CodePro_Medusa2-14X-7B-Mistral-I-v0-2 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* Nexusflow/Starling-LM-7B-beta\n* beowolx/CodeNinja-1.0-OpenChat-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
SuperPowerMz/Mistral-7B-QLoRA-Peft
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T03:53:56+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-text-to-text
|
transformers
|
4-bit AWQ-quantized version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b). Refer to the original model's card for more information (including inference snippet).
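
As an illustrative sketch only (the authoritative inference snippet lives in the original model's card), loading the quantized checkpoint should follow the usual Idefics2 pattern in transformers; the image URL and prompt below are placeholders, and a transformers version with Idefics2 support plus an AWQ backend (e.g. autoawq) is assumed to be installed.

```python
# Sketch: running the AWQ-quantized Idefics2 checkpoint with transformers.
# Assumes Idefics2 support in transformers and an AWQ backend installed.
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceM4/idefics2-8b-AWQ"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

# Placeholder inputs; substitute a real image and question.
image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```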
|
{"language": ["en"], "license": "apache-2.0", "tags": ["multimodal", "vision", "image-text-to-text", "quantized", "4-bit", "AWQ"], "datasets": ["HuggingFaceM4/OBELICS", "laion/laion-coco", "wikipedia", "facebook/pmd", "pixparse/idl-wds", "pixparse/pdfa-eng-wds", "wendlerc/RenderedText", "HuggingFaceM4/the_cauldron", "teknium/OpenHermes-2.5", "GAIR/lima", "databricks/databricks-dolly-15k", "meta-math/MetaMathQA", "TIGER-Lab/MathInstruct", "microsoft/orca-math-word-problems-200k", "camel-ai/math", "AtlasUnified/atlas-math-sets", "tiedong/goat"]}
|
HuggingFaceM4/idefics2-8b-AWQ
| null |
[
"transformers",
"safetensors",
"idefics2",
"pretraining",
"multimodal",
"vision",
"image-text-to-text",
"quantized",
"4-bit",
"AWQ",
"en",
"dataset:HuggingFaceM4/OBELICS",
"dataset:laion/laion-coco",
"dataset:wikipedia",
"dataset:facebook/pmd",
"dataset:pixparse/idl-wds",
"dataset:pixparse/pdfa-eng-wds",
"dataset:wendlerc/RenderedText",
"dataset:HuggingFaceM4/the_cauldron",
"dataset:teknium/OpenHermes-2.5",
"dataset:GAIR/lima",
"dataset:databricks/databricks-dolly-15k",
"dataset:meta-math/MetaMathQA",
"dataset:TIGER-Lab/MathInstruct",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:camel-ai/math",
"dataset:AtlasUnified/atlas-math-sets",
"dataset:tiedong/goat",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T03:55:40+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #idefics2 #pretraining #multimodal #vision #image-text-to-text #quantized #4-bit #AWQ #en #dataset-HuggingFaceM4/OBELICS #dataset-laion/laion-coco #dataset-wikipedia #dataset-facebook/pmd #dataset-pixparse/idl-wds #dataset-pixparse/pdfa-eng-wds #dataset-wendlerc/RenderedText #dataset-HuggingFaceM4/the_cauldron #dataset-teknium/OpenHermes-2.5 #dataset-GAIR/lima #dataset-databricks/databricks-dolly-15k #dataset-meta-math/MetaMathQA #dataset-TIGER-Lab/MathInstruct #dataset-microsoft/orca-math-word-problems-200k #dataset-camel-ai/math #dataset-AtlasUnified/atlas-math-sets #dataset-tiedong/goat #license-apache-2.0 #endpoints_compatible #region-us
|
4-bit AWQ-quantized version of HuggingFaceM4/idefics2-8b. Refer to the original model's card for more information (including inference snippet).
|
[] |
[
"TAGS\n#transformers #safetensors #idefics2 #pretraining #multimodal #vision #image-text-to-text #quantized #4-bit #AWQ #en #dataset-HuggingFaceM4/OBELICS #dataset-laion/laion-coco #dataset-wikipedia #dataset-facebook/pmd #dataset-pixparse/idl-wds #dataset-pixparse/pdfa-eng-wds #dataset-wendlerc/RenderedText #dataset-HuggingFaceM4/the_cauldron #dataset-teknium/OpenHermes-2.5 #dataset-GAIR/lima #dataset-databricks/databricks-dolly-15k #dataset-meta-math/MetaMathQA #dataset-TIGER-Lab/MathInstruct #dataset-microsoft/orca-math-word-problems-200k #dataset-camel-ai/math #dataset-AtlasUnified/atlas-math-sets #dataset-tiedong/goat #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
mratcheva/crsmgr_bert_distr_gen
| null |
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T03:58:53+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #bert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #bert #fill-mask #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
bdsaglam/llama-2-7b-chat-jerx-peft-qdiovveg
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T04:00:19+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
abhayesian/BobzillaV23
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T04:00:31+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.005-filtered-negative
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
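
A minimal loading sketch (not part of the generated card): the adapter is assumed to sit on top of ai-forever/ruBert-base, and the question-answering head is an assumption inferred from the SberQuAD name, since the card does not state the task head.

```python
# Sketch: attaching this PEFT adapter to the ruBert base model.
# The QuestionAnswering head is an assumption; the card does not name the task head.
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from peft import PeftModel

base_id = "ai-forever/ruBert-base"
adapter_id = "Shalazary/ruBert-base-sberquad-0.005-filtered-negative"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForQuestionAnswering.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```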
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.005-filtered-negative", "results": []}]}
|
Shalazary/ruBert-base-sberquad-0.005-filtered-negative
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T04:00:52+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.005-filtered-negative
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# ruBert-base-sberquad-0.005-filtered-negative\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.005-filtered-negative\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |