pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1–900k) | metadata (stringlengths, 2–438k) | id (stringlengths, 5–122) | last_modified (null) | tags (sequencelengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25–25) | arxiv (sequencelengths, 0–201) | languages (sequencelengths, 0–1.83k) | tags_str (stringlengths, 17–9.34k) | text_str (stringlengths, 0–389k) | text_lists (sequencelengths, 0–722) | processed_texts (sequencelengths, 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
token-classification | transformers |
RUPunct_small is the smallest model in the RUPunct family. It is ideal for simple texts and for cases where high inference speed on CPU is required.
Inference code:
```py
from transformers import pipeline
from transformers import AutoTokenizer
pt = "RUPunct/RUPunct_small"
tk = AutoTokenizer.from_pretrained(pt, strip_accents=False, add_prefix_space=True)
classifier = pipeline("ner", model=pt, tokenizer=tk, aggregation_strategy="first")
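# The label scheme below is inferred from the mapping itself (label names are transliterated Russian):
#   prefix  LOWER_*       -> keep the token as predicted
#           UPPER_*       -> capitalize the first letter
#           UPPER_TOTAL_* -> uppercase the whole token
#   suffix  O = no punctuation, PERIOD ".", COMMA ",", QUESTION "?", TIRE "—" (dash),
#           DVOETOCHIE ":" (colon), VOSKL "!" (exclamation), PERIODCOMMA ";" (semicolon),
#           DEFIS "-" (hyphen), MNOGOTOCHIE "..." (ellipsis), QUESTIONVOSKL "?!"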
def process_token(token, label):
    if label == "LOWER_O":
        return token
    if label == "LOWER_PERIOD":
        return token + "."
    if label == "LOWER_COMMA":
        return token + ","
    if label == "LOWER_QUESTION":
        return token + "?"
    if label == "LOWER_TIRE":
        return token + "—"
    if label == "LOWER_DVOETOCHIE":
        return token + ":"
    if label == "LOWER_VOSKL":
        return token + "!"
    if label == "LOWER_PERIODCOMMA":
        return token + ";"
    if label == "LOWER_DEFIS":
        return token + "-"
    if label == "LOWER_MNOGOTOCHIE":
        return token + "..."
    if label == "LOWER_QUESTIONVOSKL":
        return token + "?!"
    if label == "UPPER_O":
        return token.capitalize()
    if label == "UPPER_PERIOD":
        return token.capitalize() + "."
    if label == "UPPER_COMMA":
        return token.capitalize() + ","
    if label == "UPPER_QUESTION":
        return token.capitalize() + "?"
    if label == "UPPER_TIRE":
        return token.capitalize() + " —"
    if label == "UPPER_DVOETOCHIE":
        return token.capitalize() + ":"
    if label == "UPPER_VOSKL":
        return token.capitalize() + "!"
    if label == "UPPER_PERIODCOMMA":
        return token.capitalize() + ";"
    if label == "UPPER_DEFIS":
        return token.capitalize() + "-"
    if label == "UPPER_MNOGOTOCHIE":
        return token.capitalize() + "..."
    if label == "UPPER_QUESTIONVOSKL":
        return token.capitalize() + "?!"
    if label == "UPPER_TOTAL_O":
        return token.upper()
    if label == "UPPER_TOTAL_PERIOD":
        return token.upper() + "."
    if label == "UPPER_TOTAL_COMMA":
        return token.upper() + ","
    if label == "UPPER_TOTAL_QUESTION":
        return token.upper() + "?"
    if label == "UPPER_TOTAL_TIRE":
        return token.upper() + " —"
    if label == "UPPER_TOTAL_DVOETOCHIE":
        return token.upper() + ":"
    if label == "UPPER_TOTAL_VOSKL":
        return token.upper() + "!"
    if label == "UPPER_TOTAL_PERIODCOMMA":
        return token.upper() + ";"
    if label == "UPPER_TOTAL_DEFIS":
        return token.upper() + "-"
    if label == "UPPER_TOTAL_MNOGOTOCHIE":
        return token.upper() + "..."
    if label == "UPPER_TOTAL_QUESTIONVOSKL":
        return token.upper() + "?!"

while True:
    input_text = input(":> ")
    preds = classifier(input_text)
    output = ""
    for item in preds:
        output += " " + process_token(item['word'].strip(), item['entity_group'])
    print(">>>", output)
``` | {"language": ["ru"], "license": "mit"} | RUPunct/RUPunct_small | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-05-01T14:15:20+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #bert #token-classification #ru #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
RUPunct_small is the smallest model in the RUPunct family. It is ideal for simple texts and for cases where high inference speed on CPU is required.
Inference code:
| [] | [
"TAGS\n#transformers #pytorch #bert #token-classification #ru #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
null | transformers |
# Uploaded model
- **Developed by:** felixml
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
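A minimal usage sketch (not part of the original card): download the repository and load one of its GGUF files with `llama-cpp-python`. The `.gguf` filename below is a placeholder assumption; check the repository's file listing for the actual name and quantization you want.

```python
from huggingface_hub import snapshot_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download the GGUF repo locally.
local_dir = snapshot_download("felixml/Llama-3-8B-synthetic_text_to_sql-60-steps-fp16-gguf")

# Load one of the downloaded GGUF files (hypothetical filename).
llm = Llama(model_path=f"{local_dir}/model.gguf", n_ctx=4096)

out = llm("Write a SQL query that returns the ten most recent orders:", max_tokens=128)
print(out["choices"][0]["text"])
```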
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | felixml/Llama-3-8B-synthetic_text_to_sql-60-steps-fp16-gguf | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:15:33+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: felixml
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: felixml\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: felixml\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ehsan-Tavan/Generative-AV-LLaMA-2-7b | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:15:55+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart_extratranslations
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
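Below is a sketch (not taken from the original training script) of how these hyperparameters map onto Hugging Face `Seq2SeqTrainingArguments`; the output directory is an assumption, and the Adam betas/epsilon listed above are the library defaults.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mbart_extratranslations",  # assumed output path
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)
```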
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "mbart_extratranslations", "results": []}]} | NegarSH/mbart_extratranslations | null | [
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:16:45+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #mbart #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# mbart_extratranslations
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# mbart_extratranslations\n\nThis model was trained from scratch on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #mbart #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# mbart_extratranslations\n\nThis model was trained from scratch on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
token-classification | transformers |
RUPunct_medium is the mid-size model in the RUPunct family: a balance between performance and quality.
Inference code:
```py
from transformers import pipeline
from transformers import AutoTokenizer
pt = "RUPunct/RUPunct_medium"
tk = AutoTokenizer.from_pretrained(pt, strip_accents=False, add_prefix_space=True)
classifier = pipeline("ner", model=pt, tokenizer=tk, aggregation_strategy="first")
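# process_token maps each predicted label to casing + trailing punctuation
# (same label scheme as in the RUPunct_small card above: the LOWER_/UPPER_/UPPER_TOTAL_
# prefix controls casing; the suffix names the punctuation mark to append).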
def process_token(token, label):
    if label == "LOWER_O":
        return token
    if label == "LOWER_PERIOD":
        return token + "."
    if label == "LOWER_COMMA":
        return token + ","
    if label == "LOWER_QUESTION":
        return token + "?"
    if label == "LOWER_TIRE":
        return token + "—"
    if label == "LOWER_DVOETOCHIE":
        return token + ":"
    if label == "LOWER_VOSKL":
        return token + "!"
    if label == "LOWER_PERIODCOMMA":
        return token + ";"
    if label == "LOWER_DEFIS":
        return token + "-"
    if label == "LOWER_MNOGOTOCHIE":
        return token + "..."
    if label == "LOWER_QUESTIONVOSKL":
        return token + "?!"
    if label == "UPPER_O":
        return token.capitalize()
    if label == "UPPER_PERIOD":
        return token.capitalize() + "."
    if label == "UPPER_COMMA":
        return token.capitalize() + ","
    if label == "UPPER_QUESTION":
        return token.capitalize() + "?"
    if label == "UPPER_TIRE":
        return token.capitalize() + " —"
    if label == "UPPER_DVOETOCHIE":
        return token.capitalize() + ":"
    if label == "UPPER_VOSKL":
        return token.capitalize() + "!"
    if label == "UPPER_PERIODCOMMA":
        return token.capitalize() + ";"
    if label == "UPPER_DEFIS":
        return token.capitalize() + "-"
    if label == "UPPER_MNOGOTOCHIE":
        return token.capitalize() + "..."
    if label == "UPPER_QUESTIONVOSKL":
        return token.capitalize() + "?!"
    if label == "UPPER_TOTAL_O":
        return token.upper()
    if label == "UPPER_TOTAL_PERIOD":
        return token.upper() + "."
    if label == "UPPER_TOTAL_COMMA":
        return token.upper() + ","
    if label == "UPPER_TOTAL_QUESTION":
        return token.upper() + "?"
    if label == "UPPER_TOTAL_TIRE":
        return token.upper() + " —"
    if label == "UPPER_TOTAL_DVOETOCHIE":
        return token.upper() + ":"
    if label == "UPPER_TOTAL_VOSKL":
        return token.upper() + "!"
    if label == "UPPER_TOTAL_PERIODCOMMA":
        return token.upper() + ";"
    if label == "UPPER_TOTAL_DEFIS":
        return token.upper() + "-"
    if label == "UPPER_TOTAL_MNOGOTOCHIE":
        return token.upper() + "..."
    if label == "UPPER_TOTAL_QUESTIONVOSKL":
        return token.upper() + "?!"

while True:
    input_text = input(":> ")
    preds = classifier(input_text)
    output = ""
    for item in preds:
        output += " " + process_token(item['word'].strip(), item['entity_group'])
    print(">>>", output)
``` | {"language": ["ru"], "license": "mit"} | RUPunct/RUPunct_medium | null | [
"transformers",
"pytorch",
"electra",
"token-classification",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-05-01T14:17:16+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #electra #token-classification #ru #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
RUPunct_medium is the mid-size model in the RUPunct family: a balance between performance and quality.
Inference code:
| [] | [
"TAGS\n#transformers #pytorch #electra #token-classification #ru #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-generation | transformers | # Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.40.1
```
Also make sure you provide your Hugging Face token to the pipeline if the model is hosted in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="Aaryan-Nakhat/experiment-40-intelligent-layer-2-plus-exp-39-data",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 1
# generate_text.model.generation_config.max_new_tokens = 192
# generate_text.model.generation_config.do_sample = True
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.3)
# generate_text.model.generation_config.repetition_penalty = float(1.2)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Aaryan-Nakhat/experiment-40-intelligent-layer-2-plus-exp-39-data",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Aaryan-Nakhat/experiment-40-intelligent-layer-2-plus-exp-39-data",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 1
# generate_text.model.generation_config.max_new_tokens = 192
# generate_text.model.generation_config.do_sample = True
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.3)
# generate_text.model.generation_config.repetition_penalty = float(1.2)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Aaryan-Nakhat/experiment-40-intelligent-layer-2-plus-exp-39-data" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
# model.generation_config.min_new_tokens = 1
# model.generation_config.max_new_tokens = 192
# model.generation_config.do_sample = True
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.3)
# model.generation_config.repetition_penalty = float(1.2)
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the model with quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Sharding across multiple GPUs is also possible by setting ```device_map="auto"```.
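For illustration (not part of the original card), a minimal sketch of 8-bit loading with automatic sharding across the available GPUs; it assumes the `bitsandbytes` package is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Aaryan-Nakhat/experiment-40-intelligent-layer-2-plus-exp-39-data"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,   # or load_in_4bit=True for 4-bit quantization
    device_map="auto",   # shard layers across all visible GPUs
    trust_remote_code=True,
)
```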
## Model Architecture
```
MistralForCausalLM(
  (model): MistralModel(
    (embed_tokens): Embedding(32000, 4096, padding_idx=2)
    (layers): ModuleList(
      (0-31): 32 x MistralDecoderLayer(
        (self_attn): MistralSdpaAttention(
          (q_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (k_proj): Linear(in_features=4096, out_features=1024, bias=False)
          (v_proj): Linear(in_features=4096, out_features=1024, bias=False)
          (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (rotary_emb): MistralRotaryEmbedding()
        )
        (mlp): MistralMLP(
          (gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
          (up_proj): Linear(in_features=4096, out_features=14336, bias=False)
          (down_proj): Linear(in_features=14336, out_features=4096, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): MistralRMSNorm()
        (post_attention_layernorm): MistralRMSNorm()
      )
    )
    (norm): MistralRMSNorm()
  )
  (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. | {"language": ["en"], "library_name": "transformers", "tags": ["gpt", "llm", "large language model", "h2o-llmstudio"], "inference": false, "thumbnail": "https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico"} | Aaryan-Nakhat/experiment-40-intelligent-layer-2-plus-exp-39-data | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:17:19+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #gpt #llm #large language model #h2o-llmstudio #conversational #en #autotrain_compatible #text-generation-inference #region-us
| # Model Card
## Summary
This model was trained using H2O LLM Studio.
- Base model: HuggingFaceH4/zephyr-7b-beta
## Usage
To use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' library installed.
Also make sure you provide your Hugging Face token to the pipeline if the model is hosted in a private repo.
 - Either leave 'token=True' in the 'pipeline' and log in to 'huggingface_hub' by running
 
 - Or directly pass your <ACCESS_TOKEN> to 'token' in the 'pipeline'
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
Alternatively, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the 'transformers' package, this will allow you to set 'trust_remote_code=False'.
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
## Quantization and sharding
You can load the models using quantization by specifying or . Also, sharding on multiple GPUs is possible by setting .
## Model Architecture
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. | [
"# Model Card",
"## Summary\n\nThis model was trained using H2O LLM Studio.\n- Base model: HuggingFaceH4/zephyr-7b-beta",
"## Usage\n\nTo use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' library installed.\n\n\n\nAlso make sure you are providing your huggingface token to the pipeline if the model is lying in a private repo.\n - Either leave 'token=True' in the 'pipeline' and login to hugginface_hub by running\n \n - Or directly pass your <ACCESS_TOKEN> to 'token' in the 'pipeline'\n\n\n\nYou can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer:\n\n\n\n\n\nAlternatively, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the 'transformers' package, this will allow you to set 'trust_remote_code=False'.\n\n\n\n\nYou may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:",
"## Quantization and sharding\n\nYou can load the models using quantization by specifying or . Also, sharding on multiple GPUs is possible by setting .",
"## Model Architecture",
"## Model Configuration\n\nThis model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.",
"## Disclaimer\n\nPlease read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.\n\n- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.\n- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.\n- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.\n- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.\n- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.\n- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.\n\nBy using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #gpt #llm #large language model #h2o-llmstudio #conversational #en #autotrain_compatible #text-generation-inference #region-us \n",
"# Model Card",
"## Summary\n\nThis model was trained using H2O LLM Studio.\n- Base model: HuggingFaceH4/zephyr-7b-beta",
"## Usage\n\nTo use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' library installed.\n\n\n\nAlso make sure you are providing your huggingface token to the pipeline if the model is lying in a private repo.\n - Either leave 'token=True' in the 'pipeline' and login to hugginface_hub by running\n \n - Or directly pass your <ACCESS_TOKEN> to 'token' in the 'pipeline'\n\n\n\nYou can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer:\n\n\n\n\n\nAlternatively, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the 'transformers' package, this will allow you to set 'trust_remote_code=False'.\n\n\n\n\nYou may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:",
"## Quantization and sharding\n\nYou can load the models using quantization by specifying or . Also, sharding on multiple GPUs is possible by setting .",
"## Model Architecture",
"## Model Configuration\n\nThis model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.",
"## Disclaimer\n\nPlease read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.\n\n- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.\n- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.\n- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.\n- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.\n- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.\n- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.\n\nBy using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | rachid16/llama3-8b-RAG-News-Finance | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-01T14:17:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
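Since no snippet is provided, a minimal placeholder sketch for loading this `stablelm` text-generation checkpoint with 🤗 Transformers is shown below; the repo id is taken from the card metadata and nothing here reflects documented usage by the author.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, taken from this card's metadata rather than its text.
model_id = "quickstep3621/nfpl2g8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Simple prompt; the "conversational" tag suggests a chat template may also be available.
inputs = tokenizer("Hello, what can you do?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```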
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quickstep3621/nfpl2g8 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:17:33+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# ChimeraLlama-3-8B-v3
ChimeraLlama-3-8B-v3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B)
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
* [Danielbrdz/Barcenas-Llama3-8b-ORPO](https://huggingface.co/Danielbrdz/Barcenas-Llama3-8b-ORPO)
* [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)
* [vicgalle/Configurable-Llama-3-8B-v0.3](https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.3)
* [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3)
## 🧩 Configuration
```yaml
models:
- model: NousResearch/Meta-Llama-3-8B
# No parameters necessary for base model
- model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
density: 0.6
weight: 0.5
- model: mlabonne/OrpoLlama-3-8B
parameters:
density: 0.55
weight: 0.05
- model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
density: 0.55
weight: 0.05
- model: Danielbrdz/Barcenas-Llama3-8b-ORPO
parameters:
density: 0.55
weight: 0.2
- model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
parameters:
density: 0.55
weight: 0.1
- model: vicgalle/Configurable-Llama-3-8B-v0.3
parameters:
density: 0.55
weight: 0.05
- model: MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3
parameters:
density: 0.55
weight: 0.05
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/ChimeraLlama-3-8B-v3"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "other", "tags": ["merge", "mergekit", "lazymergekit"], "base_model": ["NousResearch/Meta-Llama-3-8B-Instruct", "mlabonne/OrpoLlama-3-8B", "cognitivecomputations/dolphin-2.9-llama3-8b", "Danielbrdz/Barcenas-Llama3-8b-ORPO", "VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct", "vicgalle/Configurable-Llama-3-8B-v0.3", "MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3"]} | mlabonne/ChimeraLlama-3-8B-v3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:mlabonne/OrpoLlama-3-8B",
"base_model:cognitivecomputations/dolphin-2.9-llama3-8b",
"base_model:Danielbrdz/Barcenas-Llama3-8b-ORPO",
"base_model:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
"base_model:vicgalle/Configurable-Llama-3-8B-v0.3",
"base_model:MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:17:47+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #base_model-NousResearch/Meta-Llama-3-8B-Instruct #base_model-mlabonne/OrpoLlama-3-8B #base_model-cognitivecomputations/dolphin-2.9-llama3-8b #base_model-Danielbrdz/Barcenas-Llama3-8b-ORPO #base_model-VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct #base_model-vicgalle/Configurable-Llama-3-8B-v0.3 #base_model-MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3 #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ChimeraLlama-3-8B-v3
ChimeraLlama-3-8B-v3 is a merge of the following models using LazyMergekit:
* NousResearch/Meta-Llama-3-8B-Instruct
* mlabonne/OrpoLlama-3-8B
* cognitivecomputations/dolphin-2.9-llama3-8b
* Danielbrdz/Barcenas-Llama3-8b-ORPO
* VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
* vicgalle/Configurable-Llama-3-8B-v0.3
* MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3
## Configuration
## Usage
| [
"# ChimeraLlama-3-8B-v3\n\nChimeraLlama-3-8B-v3 is a merge of the following models using LazyMergekit:\n* NousResearch/Meta-Llama-3-8B-Instruct\n* mlabonne/OrpoLlama-3-8B\n* cognitivecomputations/dolphin-2.9-llama3-8b\n* Danielbrdz/Barcenas-Llama3-8b-ORPO\n* VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct\n* vicgalle/Configurable-Llama-3-8B-v0.3\n* MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #base_model-NousResearch/Meta-Llama-3-8B-Instruct #base_model-mlabonne/OrpoLlama-3-8B #base_model-cognitivecomputations/dolphin-2.9-llama3-8b #base_model-Danielbrdz/Barcenas-Llama3-8b-ORPO #base_model-VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct #base_model-vicgalle/Configurable-Llama-3-8B-v0.3 #base_model-MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3 #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ChimeraLlama-3-8B-v3\n\nChimeraLlama-3-8B-v3 is a merge of the following models using LazyMergekit:\n* NousResearch/Meta-Llama-3-8B-Instruct\n* mlabonne/OrpoLlama-3-8B\n* cognitivecomputations/dolphin-2.9-llama3-8b\n* Danielbrdz/Barcenas-Llama3-8b-ORPO\n* VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct\n* vicgalle/Configurable-Llama-3-8B-v0.3\n* MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3",
"## Configuration",
"## Usage"
] |
null | null |
# rinna-llama-3-youko-8b-gguf
This is a gguf-format conversion of [llama-3-youko-8b published by rinna](https://huggingface.co/rinna/llama-3-youko-8b).
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
Model list
GGUF versions
[mmnga/rinna-llama-3-youko-8b-gguf](https://huggingface.co/mmnga/rinna-llama-3-youko-8b-gguf)
[mmnga/rinna-nekomata-7b-instruction-gguf](https://huggingface.co/mmnga/rinna-nekomata-7b-instruction-gguf)
[mmnga/rinna-nekomata-14b-instruction-gguf](https://huggingface.co/mmnga/rinna-nekomata-14b-instruction-gguf)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'rinna-llama-3-youko-8b-q4_0.gguf' -n 128 -p '西田幾多郎は、'
```
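For use from Python, a minimal sketch with the `llama-cpp-python` bindings is given below; the quantization file name and sampling settings are assumptions, so adjust them to the file you actually download.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumed file name; pick the quantization you actually downloaded from this repo.
llm = Llama(model_path="rinna-llama-3-youko-8b-q4_0.gguf", n_ctx=2048)

# Same Japanese prompt as the llama.cpp example above.
out = llm("西田幾多郎は、", max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```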
| {"language": ["en", "ja"], "license": "llama3", "datasets": ["TFMC/imatrix-dataset-for-japanese-llm"]} | mmnga/rinna-llama-3-youko-8b-gguf | null | [
"gguf",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:llama3",
"region:us"
] | null | 2024-05-01T14:17:53+00:00 | [] | [
"en",
"ja"
] | TAGS
#gguf #en #ja #dataset-TFMC/imatrix-dataset-for-japanese-llm #license-llama3 #region-us
|
# rinna-llama-3-youko-8b-gguf
This is a gguf-format conversion of llama-3-youko-8b published by rinna.
The imatrix data was created using TFMC/imatrix-dataset-for-japanese-llm.
Model list
GGUF versions
mmnga/rinna-llama-3-youko-8b-gguf
mmnga/rinna-nekomata-7b-instruction-gguf
mmnga/rinna-nekomata-14b-instruction-gguf
## Usage
| [
"# rinna-llama-3-youko-8b-gguf\nrinnaさんが公開しているllama-3-youko-8bのggufフォーマット変換版です。 \n\nimatrixのデータはTFMC/imatrix-dataset-for-japanese-llmを使用して作成しました。 \n\nモデル一覧 \n\nGGUF版 \nmmnga/rinna-llama-3-youko-8b-gguf \nmmnga/rinna-nekomata-7b-instruction-gguf \nmmnga/rinna-nekomata-14b-instruction-gguf",
"## Usage"
] | [
"TAGS\n#gguf #en #ja #dataset-TFMC/imatrix-dataset-for-japanese-llm #license-llama3 #region-us \n",
"# rinna-llama-3-youko-8b-gguf\nrinnaさんが公開しているllama-3-youko-8bのggufフォーマット変換版です。 \n\nimatrixのデータはTFMC/imatrix-dataset-for-japanese-llmを使用して作成しました。 \n\nモデル一覧 \n\nGGUF版 \nmmnga/rinna-llama-3-youko-8b-gguf \nmmnga/rinna-nekomata-7b-instruction-gguf \nmmnga/rinna-nekomata-14b-instruction-gguf",
"## Usage"
] |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: lzacchini/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids"]} | lzacchini/ppo-Pyramids | null | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | null | 2024-05-01T14:18:06+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #Pyramids #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Pyramids #region-us
|
# ppo Agent playing Pyramids
This is a trained model of a ppo agent playing Pyramids
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: lzacchini/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing Pyramids\n This is a trained model of a ppo agent playing Pyramids\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: lzacchini/ppo-Pyramids\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #Pyramids #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Pyramids #region-us \n",
"# ppo Agent playing Pyramids\n This is a trained model of a ppo agent playing Pyramids\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: lzacchini/ppo-Pyramids\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
token-classification | transformers |
RUPunct_big is the largest model in the RUPunct family. It is suitable for most tasks.
Inference code:
```py
from transformers import pipeline
from transformers import AutoTokenizer
pt = "RUPunct/RUPunct_big"
tk = AutoTokenizer.from_pretrained(pt, strip_accents=False, add_prefix_space=True)
classifier = pipeline("ner", model=pt, tokenizer=tk, aggregation_strategy="first")
def process_token(token, label):
if label == "LOWER_O":
return token
if label == "LOWER_PERIOD":
return token + "."
if label == "LOWER_COMMA":
return token + ","
if label == "LOWER_QUESTION":
return token + "?"
if label == "LOWER_TIRE":
return token + "—"
if label == "LOWER_DVOETOCHIE":
return token + ":"
if label == "LOWER_VOSKL":
return token + "!"
if label == "LOWER_PERIODCOMMA":
return token + ";"
if label == "LOWER_DEFIS":
return token + "-"
if label == "LOWER_MNOGOTOCHIE":
return token + "..."
if label == "LOWER_QUESTIONVOSKL":
return token + "?!"
if label == "UPPER_O":
return token.capitalize()
if label == "UPPER_PERIOD":
return token.capitalize() + "."
if label == "UPPER_COMMA":
return token.capitalize() + ","
if label == "UPPER_QUESTION":
return token.capitalize() + "?"
if label == "UPPER_TIRE":
return token.capitalize() + " —"
if label == "UPPER_DVOETOCHIE":
return token.capitalize() + ":"
if label == "UPPER_VOSKL":
return token.capitalize() + "!"
if label == "UPPER_PERIODCOMMA":
return token.capitalize() + ";"
if label == "UPPER_DEFIS":
return token.capitalize() + "-"
if label == "UPPER_MNOGOTOCHIE":
return token.capitalize() + "..."
if label == "UPPER_QUESTIONVOSKL":
return token.capitalize() + "?!"
if label == "UPPER_TOTAL_O":
return token.upper()
if label == "UPPER_TOTAL_PERIOD":
return token.upper() + "."
if label == "UPPER_TOTAL_COMMA":
return token.upper() + ","
if label == "UPPER_TOTAL_QUESTION":
return token.upper() + "?"
if label == "UPPER_TOTAL_TIRE":
return token.upper() + " —"
if label == "UPPER_TOTAL_DVOETOCHIE":
return token.upper() + ":"
if label == "UPPER_TOTAL_VOSKL":
return token.upper() + "!"
if label == "UPPER_TOTAL_PERIODCOMMA":
return token.upper() + ";"
if label == "UPPER_TOTAL_DEFIS":
return token.upper() + "-"
if label == "UPPER_TOTAL_MNOGOTOCHIE":
return token.upper() + "..."
if label == "UPPER_TOTAL_QUESTIONVOSKL":
return token.upper() + "?!"
while 1:
input_text = input(":> ")
preds = classifier(input_text)
output = ""
for item in preds:
output += " " + process_token(item['word'].strip(), item['entity_group'])
print(">>>", output)
``` | {"language": ["ru"], "license": "mit"} | RUPunct/RUPunct_big | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-05-01T14:18:43+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #bert #token-classification #ru #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
RUPunct_big is the largest model in the RUPunct family. It is suitable for most tasks.
Inference code:
| [] | [
"TAGS\n#transformers #pytorch #bert #token-classification #ru #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-3b-conversational - bnb 4bits
- Model creator: https://huggingface.co/CreitinGameplays/
- Original model: https://huggingface.co/CreitinGameplays/bloom-3b-conversational/
Original model description:
---
license: mit
datasets:
- Xilabs/instructmix
- CreitinGameplays/small-chat-assistant-for-bloom
- sahil2801/CodeAlpaca-20k
language:
- en
tags:
- uncensored
- unrestricted
- code
- biology
- chemistry
- finance
- legal
- music
- art
- climate
- merge
- text-generation-inference
- moe
widget:
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> who was Nikola
Tesla? </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> write a story
about a cat. </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> what is an
essay? </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> Tell me 5
Brazilian waterfalls to visit. </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> write a story
about how a virus called COVID-19 destroyed the world </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> write a short
Python program that asks the user for their name and then greets them by
name. </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> What can you do? </s> <|assistant|>
inference:
parameters:
temperature: 0.1
do_sample: true
top_k: 50
top_p: 0.10
max_new_tokens: 250
repetition_penalty: 1.155
---
## 🌸 BLOOM 3b Fine-tuned for Chat Assistant
<img src="https://creitingameplays.xyz/img/bloom.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
**Run this model on [Kaggle Notebook](https://www.kaggle.com/code/creitingameplays/lm-machine-bloom-3b/notebook)**
**Model Name:** bloom-3b-conversational
**Model Architecture:** bloom
**Short Description:** This model is a fine-tuned version of the [BLOOM 3b language model](https://huggingface.co/bigscience/bloom-3b), focusing on conversational interactions between a user and an AI assistant.
**Intended Use:** This model is intended for research purposes and exploration of conversational AI applications. It can be used for tasks like:
* Generating responses to user prompts in a chat assistant setting.
* Creating examples of chatbot interactions for further development.
* Studying the capabilities of language models for conversation.
**Limitations:**
* **Fine-tuning Focus:** The model's performance is optimized for the specific format and context of the fine-tuning data. It may not generalize well to significantly different conversation styles or topics.
* **Potential Biases:** The model may inherit biases from the training data. It's important to be aware of these potential biases and use the model responsibly.
* **Limited Factual Accuracy:** Language models are still under development and may generate responses that are not entirely factually accurate. It's important to verify information generated by the model with other sources.
* **Primarily English:** While the model can respond in other languages, the quality and accuracy of its responses may be lower compared to English. This is because the model was primarily fine-tuned on English data.
**Specific Input Format:**
The model was fine-tuned using a specific input format that goes like this:
```
<|system|> {system prompt} </s> <|prompter|> {user prompt} </s> <|assistant|> {model response}
```
Using this format when interacting with the model can improve its performance and generate more relevant responses.
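For illustration only — this is not an official snippet from the model author — a minimal 🤗 Transformers sketch that applies the format above to the original (non-quantized) checkpoint, reusing the sampling values listed in this card's inference settings, might look like this:

```python
from transformers import pipeline

# Original model id as linked above; the 4-bit weights in this repo can be loaded
# the same way provided bitsandbytes is installed.
generator = pipeline("text-generation", model="CreitinGameplays/bloom-3b-conversational")

prompt = (
    "<|system|> You are a helpful AI assistant. </s> "
    "<|prompter|> who was Nikola Tesla? </s> <|assistant|>"
)

# Sampling values taken from the inference settings in this card.
out = generator(
    prompt,
    do_sample=True,
    temperature=0.1,
    top_k=50,
    top_p=0.10,
    max_new_tokens=250,
    repetition_penalty=1.155,
)
print(out[0]["generated_text"])
```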
**Disclaimer:** This model is for research and exploration purposes only. It should not be used in any applications that require high levels of accuracy or reliability.
| {} | RichardErkhov/CreitinGameplays_-_bloom-3b-conversational-4bits | null | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-01T14:18:57+00:00 | [] | [] | TAGS
#transformers #safetensors #bloom #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
bloom-3b-conversational - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: mit
datasets:
- Xilabs/instructmix
- CreitinGameplays/small-chat-assistant-for-bloom
- sahil2801/CodeAlpaca-20k
language:
- en
tags:
- uncensored
- unrestricted
- code
- biology
- chemistry
- finance
- legal
- music
- art
- climate
- merge
- text-generation-inference
- moe
widget:
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> who was Nikola
Tesla? </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> write a story
about a cat. </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> what is an
essay? </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> Tell me 5
Brazilian waterfalls to visit. </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> write a story
about how a virus called COVID-19 destroyed the world </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> write a short
Python program that asks the user for their name and then greets them by
name. </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> What can you do? </s> <|assistant|>
inference:
parameters:
temperature: 0.1
do_sample: true
top_k: 50
top_p: 0.10
max_new_tokens: 250
repetition_penalty: 1.155
---
## BLOOM 3b Fine-tuned for Chat Assistant
<img src="URL alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Run this model on Kaggle Notebook
Model Name: bloom-3b-conversational
Model Architecture: bloom
Short Description: This model is a fine-tuned version of the BLOOM 3b language model, focusing on conversational interactions between a user and an AI assistant.
Intended Use: This model is intended for research purposes and exploration of conversational AI applications. It can be used for tasks like:
* Generating responses to user prompts in a chat assistant setting.
* Creating examples of chatbot interactions for further development.
* Studying the capabilities of language models for conversation.
Limitations:
* Fine-tuning Focus: The model's performance is optimized for the specific format and context of the fine-tuning data. It may not generalize well to significantly different conversation styles or topics.
* Potential Biases: The model may inherit biases from the training data. It's important to be aware of these potential biases and use the model responsibly.
* Limited Factual Accuracy: Language models are still under development and may generate responses that are not entirely factually accurate. It's important to verify information generated by the model with other sources.
* Primarily English: While the model can respond in other languages, the quality and accuracy of its responses may be lower compared to English. This is because the model was primarily fine-tuned on English data.
Specific Input Format:
The model was fine-tuned using a specific input format that goes like this:
Using this format when interacting with the model can improve its performance and generate more relevant responses.
Disclaimer: This model is for research and exploration purposes only. It should not be used in any applications that require high levels of accuracy or reliability.
| [
"## BLOOM 3b Fine-tuned for Chat Assistant\n\n<img src=\"URL alt=\"BigScience Logo\" width=\"800\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\nRun this model on Kaggle Notebook\n\nModel Name: bloom-3b-conversational\n\nModel Architecture: bloom\n\nShort Description: This model is a fine-tuned version of the BLOOM 3b language model, focusing on conversational interactions between an user and an AI assistant.\n\nIntended Use: This model is intended for research purposes and exploration of conversational AI applications. It can be used for tasks like:\n\n* Generating responses to user prompts in a chat assistant setting.\n* Creating examples of chatbot interactions for further development.\n* Studying the capabilities of language models for conversation.\n\nLimitations:\n\n* Fine-tuning Focus: The model's performance is optimized for the specific format and context of the fine-tuning data. It may not generalize well to significantly different conversation styles or topics.\n* Potential Biases: The model may inherit biases from the training data. It's important to be aware of these potential biases and use the model responsibly.\n* Limited Factual Accuracy: Language models are still under development and may generate responses that are not entirely factually accurate. It's important to verify information generated by the model with other sources.\n* Primarily English: While the model can respond in other languages, the quality and accuracy of its responses may be lower compared to English. This is because the model was primarily fine-tuned on English data.\n\nSpecific Input Format:\n\nThe model was fine-tuned using a specific input format that goes like this:\n\n\n\nUsing this format when interacting with the model can improve its performance and generate more relevant responses.\n\nDisclaimer: This model is for research and exploration purposes only. It should not be used in any applications that require high levels of accuracy or reliability."
] | [
"TAGS\n#transformers #safetensors #bloom #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"## BLOOM 3b Fine-tuned for Chat Assistant\n\n<img src=\"URL alt=\"BigScience Logo\" width=\"800\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\nRun this model on Kaggle Notebook\n\nModel Name: bloom-3b-conversational\n\nModel Architecture: bloom\n\nShort Description: This model is a fine-tuned version of the BLOOM 3b language model, focusing on conversational interactions between an user and an AI assistant.\n\nIntended Use: This model is intended for research purposes and exploration of conversational AI applications. It can be used for tasks like:\n\n* Generating responses to user prompts in a chat assistant setting.\n* Creating examples of chatbot interactions for further development.\n* Studying the capabilities of language models for conversation.\n\nLimitations:\n\n* Fine-tuning Focus: The model's performance is optimized for the specific format and context of the fine-tuning data. It may not generalize well to significantly different conversation styles or topics.\n* Potential Biases: The model may inherit biases from the training data. It's important to be aware of these potential biases and use the model responsibly.\n* Limited Factual Accuracy: Language models are still under development and may generate responses that are not entirely factually accurate. It's important to verify information generated by the model with other sources.\n* Primarily English: While the model can respond in other languages, the quality and accuracy of its responses may be lower compared to English. This is because the model was primarily fine-tuned on English data.\n\nSpecific Input Format:\n\nThe model was fine-tuned using a specific input format that goes like this:\n\n\n\nUsing this format when interacting with the model can improve its performance and generate more relevant responses.\n\nDisclaimer: This model is for research and exploration purposes only. It should not be used in any applications that require high levels of accuracy or reliability."
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
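No snippet is provided. Judging only from the repository name (`abbenedek/wav2vec2-tokenizer2`, taken from the card metadata), the artifact may be a Wav2Vec2 CTC tokenizer; a loading sketch under that assumption could look like this — the artifact type itself is a guess, not documented behaviour.

```python
from transformers import Wav2Vec2CTCTokenizer

# Assumption: the repo stores a CTC tokenizer, as its name suggests.
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("abbenedek/wav2vec2-tokenizer2")

print(tokenizer.get_vocab())        # inspect the character vocabulary
print(tokenizer.decode([1, 2, 3]))  # decode a toy sequence of token ids
```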
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abbenedek/wav2vec2-tokenizer2 | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:18:58+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
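No snippet is provided; as a placeholder, the `phi` text-generation checkpoint named in this card's metadata (`Sayan01/Phi-by2-Chat-T2`) could be loaded with a standard pipeline — the repo id and prompt below are assumptions, not documented usage.

```python
from transformers import pipeline

# Repo id taken from this card's metadata; treat this as a placeholder sketch only.
generator = pipeline("text-generation", model="Sayan01/Phi-by2-Chat-T2")

result = generator("Explain what a language model is in one sentence.", max_new_tokens=64)
print(result[0]["generated_text"])
```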
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Sayan01/Phi-by2-Chat-T2 | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:19:21+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cer
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0018
- Cer: 0.0718
## Model description
More information needed
## Intended uses & limitations
More information needed
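In the absence of further documentation, a minimal inference sketch with the `transformers` automatic-speech-recognition pipeline might look like this (the audio path is a placeholder; wav2vec2-base expects 16 kHz mono audio):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
asr = pipeline("automatic-speech-recognition", model="abbenedek/wav2vec2-base-cer")

# "sample.wav" is a placeholder path; resample your audio to 16 kHz before transcribing.
print(asr("sample.wav")["text"])
```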
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 200
- mixed_precision_training: Native AMP
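For reference, a hedged sketch of how the hyperparameters above might map onto `TrainingArguments` (model, dataset, and data collator setup are omitted; the output directory is a placeholder, and the Adam betas/epsilon are the defaults listed above):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-cer",
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=150,
    num_train_epochs=200,
    fp16=True,  # Native AMP mixed-precision training
)
```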
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.646 | 15.38 | 200 | 1.9010 | 0.6387 |
| 0.6207 | 30.77 | 400 | 0.0849 | 0.1757 |
| 0.0527 | 46.15 | 600 | 0.0643 | 0.1386 |
| 0.0325 | 61.54 | 800 | 0.0117 | 0.0888 |
| 0.0156 | 76.92 | 1000 | 0.0101 | 0.1148 |
| 0.0081 | 92.31 | 1200 | 0.0042 | 0.1255 |
| 0.0057 | 107.69 | 1400 | 0.0036 | 0.1284 |
| 0.0058 | 123.08 | 1600 | 0.0066 | 0.0891 |
| 0.0066 | 138.46 | 1800 | 0.0028 | 0.0926 |
| 0.0049 | 153.85 | 2000 | 0.0026 | 0.0391 |
| 0.0044 | 169.23 | 2200 | 0.0020 | 0.0574 |
| 0.0024 | 184.62 | 2400 | 0.0018 | 0.0745 |
| 0.0023 | 200.0 | 2600 | 0.0018 | 0.0718 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.14.5
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "wav2vec2-base-cer", "results": []}]} | abbenedek/wav2vec2-base-cer | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:20:06+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-base-cer
=================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0018
* Cer: 0.0718
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 64
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 150
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.14.5
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 150\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.14.5\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 150\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.14.5\n* Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** DuongTrongChi
- **License:** apache-2.0
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"]} | DuongTrongChi/llama-3-dpo-step-915 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:20:50+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: DuongTrongChi
- License: apache-2.0
| [
"# Uploaded model\n\n- Developed by: DuongTrongChi\n- License: apache-2.0"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: DuongTrongChi\n- License: apache-2.0"
] |
text-generation | transformers | # 🇰🇷 SmartLlama-3-Ko-8B-256k-PoSE
<a href="https://ibb.co/rs8DhB8"><img src="https://i.ibb.co/8cv1wyv/Smart-Llama-3-Ko-8-B-256k-Po-SE.png" alt="Smart-Llama-3-Ko-8-B-256k-Po-SE" border="0"></a>
SmartLlama-3-Ko-8B-256k-[PoSE](https://huggingface.co/papers/2309.10400) is an advanced AI model that integrates the capabilities of several advanced language models, designed to excel in a variety of tasks ranging from technical problem-solving to multilingual communication, especially with its extended context length of 256k tokens. This model is uniquely positioned to handle larger and more complex datasets and longer conversational contexts, making it ideal for deep learning applications requiring extensive text understanding and generation.
## 📕 Merge Details
### Component Models and Contributions
- **NousResearch/Meta-Llama-3-8B and Meta-Llama-3-8B-Instruct**: These models provide a solid foundation for general language understanding and instruction-following capabilities.
- **winglian/llama-3-8b-256k-PoSE**: Utilizes Positional Skip-wise Training (PoSE) to extend Llama's context length to 256k, significantly improving the model's ability to handle extensive texts and complex instructions, enhancing performance in tasks requiring long-duration focus and memory.
- **Locutusque/Llama-3-Orca-1.0-8B**: Specializes in mathematical, coding, and writing tasks, bringing precision to technical and creative outputs.
- **abacusai/Llama-3-Smaug-8B**: Improves the model's performance in real-world, multi-turn conversations, which is crucial for applications in customer service and interactive learning environments.
- **beomi/Llama-3-Open-Ko-8B-Instruct-preview**: Focuses on improving understanding and generation of Korean, offering robust solutions for bilingual or multilingual applications targeting Korean-speaking audiences.
## 🖼️ Key Features
- **Extended Context Length**: Utilizes the PoSE (Positional Skip-wise Training) technique to handle up to 256,000 tokens, making it ideal for analyzing large volumes of text such as books, comprehensive reports, and lengthy communications.
- **Multilingual Support**: While primarily focused on Korean language processing, this model also provides robust support for multiple languages, enhancing its utility in global applications.
- **Advanced Integration of Models**: Combines strengths from various models including NousResearch's Meta-Llama-3-8B, the instruction-following capabilities of Llama-3-Open-Ko-8B-Instruct-preview, and specialized capabilities from models like Llama-3-Smaug-8B for nuanced dialogues and Orca-1.0-8B for technical precision.
## 🎨 Models Merged
The following models were included in the merge:
- **winglian/llama-3-8b-256k-PoSE**: [Extends the context handling capability](https://huggingface.co/winglian/llama-3-8b-256k-PoSE). This model uses Positional Skip-wise Training (PoSE) to enhance the handling of extended context lengths, up to 256k tokens.
- **Locutusque/Llama-3-Orca-1.0-8B**: [Enhances abilities in handling technical content](https://huggingface.co/Locutusque/Llama-3-Orca-1.0-8B). Specialized in computational, scientific, and technical tasks, improving the model's ability to process complex academic and technical language.
- **abacusai/Llama-3-Smaug-8B**: [Improves multi-turn conversational abilities](https://huggingface.co/abacusai/Llama-3-Smaug-8B). Boosts performance in engaging in lengthy, context-aware dialogues necessary for effective customer service and interactive learning.
- **beomi/Llama-3-Open-Ko-8B-Instruct-preview**: [Provides enhanced capabilities for Korean language processing](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview). This model is fine-tuned to understand and generate Korean, making it ideal for applications targeting Korean-speaking users.
- **NousResearch/Meta-Llama-3-8B-Instruct**: [Offers advanced instruction-following capabilities](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct). It is optimized to follow complex instructions, enhancing the model's utility in task-oriented dialogues and applications that require a high level of understanding and execution of user commands.
### 🖋️ Merge Method
- **DARE TIES**: This method was employed to ensure that each component model contributes effectively to the merged model, maintaining a high level of performance across diverse applications. NousResearch/Meta-Llama-3-8B served as the base model for this integration, providing a stable and powerful framework for the other models to build upon.
### 🗞️ Configuration
The YAML configuration for this model:
```yaml
models:
- model: NousResearch/Meta-Llama-3-8B
# Base model providing a general foundation without specific parameters
- model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
density: 0.60
weight: 0.25
- model: winglian/llama-3-8b-256k-PoSE
parameters:
density: 0.60
weight: 0.20
- model: Locutusque/Llama-3-Orca-1.0-8B
parameters:
density: 0.55
weight: 0.15
- model: abacusai/Llama-3-Smaug-8B
parameters:
density: 0.55
weight: 0.15
- model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
parameters:
density: 0.55
weight: 0.30
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
int8_mask: true
dtype: bfloat16
```
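The card does not show the merge invocation itself. Assuming the YAML above is saved as `config.yaml`, a typical mergekit run might look like the sketch below (the output directory name and the `--cuda` flag are assumptions about the local setup):

```
mergekit-yaml config.yaml ./SmartLlama-3-Ko-8B-256k-PoSE --cuda
```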
### 🎊 Test Result
**SmartLlama-3-Ko-8B-256k-PoSE Summary Ability**
consideration
Long texts seemed to summarize well, but I observed that the answers sometimes came back in English. When I then asked for the answer to be translated into Korean, it was translated correctly. Summarization itself works well, but keep in mind that the model occasionally cannot produce the summary directly in Korean.
<a href="https://ibb.co/sjJJr3f"><img src="https://i.ibb.co/Wnpp1Kh/Screenshot-2024-05-02-at-6-44-30-AM.png" alt="Screenshot-2024-05-02-at-6-44-30-AM" border="0"></a>
<a href="https://ibb.co/D74fzN0"><img src="https://i.ibb.co/8jMgNJ1/Screenshot-2024-05-02-at-6-44-42-AM.png" alt="Screenshot-2024-05-02-at-6-44-42-AM" border="0"></a>
**Source**: [Korea Institute for Industrial Economics and Trade: Macroeconomic Outlook for 2024](https://kocham.org/announcement/%EC%82%B0%EC%97%85%EC%97%B0%EA%B5%AC%EC%9B%90-2024%EB%85%84-%EA%B1%B0%EC%8B%9C%EA%B2%BD%EC%A0%9C-%EC%A0%84%EB%A7%9D).
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["winglian/llama-3-8b-256k-PoSE", "Locutusque/Llama-3-Orca-1.0-8B", "NousResearch/Meta-Llama-3-8B", "abacusai/Llama-3-Smaug-8B", "beomi/Llama-3-Open-Ko-8B-Instruct-preview", "NousResearch/Meta-Llama-3-8B-Instruct"]} | asiansoul/SmartLlama-3-Ko-8B-256k-PoSE | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2309.10400",
"base_model:winglian/llama-3-8b-256k-PoSE",
"base_model:Locutusque/Llama-3-Orca-1.0-8B",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:abacusai/Llama-3-Smaug-8B",
"base_model:beomi/Llama-3-Open-Ko-8B-Instruct-preview",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:20:54+00:00 | [
"2309.10400"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2309.10400 #base_model-winglian/llama-3-8b-256k-PoSE #base_model-Locutusque/Llama-3-Orca-1.0-8B #base_model-NousResearch/Meta-Llama-3-8B #base_model-abacusai/Llama-3-Smaug-8B #base_model-beomi/Llama-3-Open-Ko-8B-Instruct-preview #base_model-NousResearch/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # 🇰🇷 SmartLlama-3-Ko-8B-256k-PoSE
<a href="URL src="https://i.URL alt="Smart-Llama-3-Ko-8-B-256k-Po-SE" border="0"></a>
SmartLlama-3-Ko-8B-256k-PoSE is an advanced AI model that integrates the capabilities of several advanced language models, designed to excel in a variety of tasks ranging from technical problem-solving to multilingual communication, especially with its extended context length of 256k tokens. This model is uniquely positioned to handle larger and more complex datasets and longer conversational contexts, making it ideal for deep learning applications requiring extensive text understanding and generation.
## Merge Details
### Component Models and Contributions
- NousResearch/Meta-Llama-3-8B and Meta-Llama-3-8B-Instruct: These models provide a solid foundation for general language understanding and instruction-following capabilities.
- winglian/llama-3-8b-256k-PoSE: Utilizes Positional Skip-wise Training (PoSE) to extend Llama's context length to 256k, significantly improving the model's ability to handle extensive texts and complex instructions, enhancing performance in tasks requiring long-duration focus and memory.
- Locutusque/Llama-3-Orca-1.0-8B: Specializes in mathematical, coding, and writing tasks, bringing precision to technical and creative outputs.
- abacusai/Llama-3-Smaug-8B: Improves the model's performance in real-world, multi-turn conversations, which is crucial for applications in customer service and interactive learning environments.
- beomi/Llama-3-Open-Ko-8B-Instruct-preview: Focuses on improving understanding and generation of Korean, offering robust solutions for bilingual or multilingual applications targeting Korean-speaking audiences.
## ️ Key Features
- Extended Context Length: Utilizes the PoSE (Positional Encoding) technique to handle up to 256,000 tokens, making it ideal for analyzing large volumes of text such as books, comprehensive reports, and lengthy communications.
- Multilingual Support: While primarily focused on Korean language processing, this model also provides robust support for multiple languages, enhancing its utility in global applications.
- Advanced Integration of Models: Combines strengths from various models including NousResearch's Meta-Llama-3-8B, the instruction-following capabilities of Llama-3-Open-Ko-8B-Instruct-preview, and specialized capabilities from models like Llama-3-Smaug-8B for nuanced dialogues and Orca-1.0-8B for technical precision.
## Models Merged
The following models were included in the merge:
- winglian/llama-3-8b-256k-PoSE: Extends the context handling capability. This model uses Positional Skip-wise Training (PoSE) to enhance the handling of extended context lengths, up to 256k tokens.
- Locutusque/Llama-3-Orca-1.0-8B: Enhances abilities in handling technical content. Specialized in computational, scientific, and technical tasks, improving the model's ability to process complex academic and technical language.
- abacusai/Llama-3-Smaug-8B: Improves multi-turn conversational abilities. Boosts performance in engaging in lengthy, context-aware dialogues necessary for effective customer service and interactive learning.
- beomi/Llama-3-Open-Ko-8B-Instruct-preview: Provides enhanced capabilities for Korean language processing. This model is fine-tuned to understand and generate Korean, making it ideal for applications targeting Korean-speaking users.
- NousResearch/Meta-Llama-3-8B-Instruct: Offers advanced instruction-following capabilities. It is optimized to follow complex instructions, enhancing the model's utility in task-oriented dialogues and applications that require a high level of understanding and execution of user commands.
### ️ Merge Method
- DARE TIES: This method was employed to ensure that each component model contributes effectively to the merged model, maintaining a high level of performance across diverse applications. NousResearch/Meta-Llama-3-8B served as the base model for this integration, providing a stable and powerful framework for the other models to build upon.
### ️ Configuration
The YAML configuration for this model:
### Test Result
SmartLlama-3-Ko-8B-256k-PoSE Summary Ability
consideration
Long sentences seemed to summarize well, but I observed that answers came in English. And when I asked for it to be translated into Korean, I confirmed that it was translated well. The summary seems to work well, but you can take into account the fact that there are times when it cannot be summarized directly in Korean.
<a href="URL src="https://i.URL alt="Screenshot-2024-05-02-at-6-44-30-AM" border="0"></a>
<a href="URL src="https://i.URL alt="Screenshot-2024-05-02-at-6-44-42-AM" border="0"></a>
Source: Korea Institute for Industrial Economics and Trade: Macroeconomic Outlook for 2024.
| [
"# 🇰🇷 SmartLlama-3-Ko-8B-256k-PoSE\n\n<a href=\"URL src=\"https://i.URL alt=\"Smart-Llama-3-Ko-8-B-256k-Po-SE\" border=\"0\"></a>\n\nSmartLlama-3-Ko-8B-256k-PoSE is an advanced AI model that integrates the capabilities of several advanced language models, designed to excel in a variety of tasks ranging from technical problem-solving to multilingual communication, especially with its extended context length of 256k tokens. This model is uniquely positioned to handle larger and more complex datasets and longer conversational contexts, making it ideal for deep learning applications requiring extensive text understanding and generation.",
"## Merge Details",
"### Component Models and Contributions\n- NousResearch/Meta-Llama-3-8B and Meta-Llama-3-8B-Instruct: These models provide a solid foundation for general language understanding and instruction-following capabilities.\n- winglian/llama-3-8b-256k-PoSE: Utilizes Positional Skip-wise Training (PoSE) to extend Llama's context length to 256k, significantly improving the model's ability to handle extensive texts and complex instructions, enhancing performance in tasks requiring long-duration focus and memory.\n- Locutusque/Llama-3-Orca-1.0-8B: Specializes in mathematical, coding, and writing tasks, bringing precision to technical and creative outputs.\n- abacusai/Llama-3-Smaug-8B: Improves the model's performance in real-world, multi-turn conversations, which is crucial for applications in customer service and interactive learning environments.\n- beomi/Llama-3-Open-Ko-8B-Instruct-preview: Focuses on improving understanding and generation of Korean, offering robust solutions for bilingual or multilingual applications targeting Korean-speaking audiences.",
"## ️ Key Features\n\n- Extended Context Length: Utilizes the PoSE (Positional Encoding) technique to handle up to 256,000 tokens, making it ideal for analyzing large volumes of text such as books, comprehensive reports, and lengthy communications.\n \n- Multilingual Support: While primarily focused on Korean language processing, this model also provides robust support for multiple languages, enhancing its utility in global applications.\n \n- Advanced Integration of Models: Combines strengths from various models including NousResearch's Meta-Llama-3-8B, the instruction-following capabilities of Llama-3-Open-Ko-8B-Instruct-preview, and specialized capabilities from models like Llama-3-Smaug-8B for nuanced dialogues and Orca-1.0-8B for technical precision.",
"## Models Merged\n\nThe following models were included in the merge:\n- winglian/llama-3-8b-256k-PoSE: Extends the context handling capability. This model uses Positional Skip-wise Training (PoSE) to enhance the handling of extended context lengths, up to 256k tokens.\n- Locutusque/Llama-3-Orca-1.0-8B: Enhances abilities in handling technical content. Specialized in computational, scientific, and technical tasks, improving the model's ability to process complex academic and technical language.\n- abacusai/Llama-3-Smaug-8B: Improves multi-turn conversational abilities. Boosts performance in engaging in lengthy, context-aware dialogues necessary for effective customer service and interactive learning.\n- beomi/Llama-3-Open-Ko-8B-Instruct-preview: Provides enhanced capabilities for Korean language processing. This model is fine-tuned to understand and generate Korean, making it ideal for applications targeting Korean-speaking users.\n- NousResearch/Meta-Llama-3-8B-Instruct: Offers advanced instruction-following capabilities. It is optimized to follow complex instructions, enhancing the model's utility in task-oriented dialogues and applications that require a high level of understanding and execution of user commands.",
"### ️ Merge Method\n- DARE TIES: This method was employed to ensure that each component model contributes effectively to the merged model, maintaining a high level of performance across diverse applications. NousResearch/Meta-Llama-3-8B served as the base model for this integration, providing a stable and powerful framework for the other models to build upon.",
"### ️ Configuration\nThe YAML configuration for this model:",
"### Test Result\n\nSmartLlama-3-Ko-8B-256k-PoSE Summary Ability\n\nconsideration\n\nLong sentences seemed to summarize well, but I observed that answers came in English. And when I asked for it to be translated into Korean, I confirmed that it was translated well. The summary seems to work well, but you can take into account the fact that there are times when it cannot be summarized directly in Korean.\n\n<a href=\"URL src=\"https://i.URL alt=\"Screenshot-2024-05-02-at-6-44-30-AM\" border=\"0\"></a>\n<a href=\"URL src=\"https://i.URL alt=\"Screenshot-2024-05-02-at-6-44-42-AM\" border=\"0\"></a>\nSource: Korea Institute for Industrial Economics and Trade: Macroeconomic Outlook for 2024."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2309.10400 #base_model-winglian/llama-3-8b-256k-PoSE #base_model-Locutusque/Llama-3-Orca-1.0-8B #base_model-NousResearch/Meta-Llama-3-8B #base_model-abacusai/Llama-3-Smaug-8B #base_model-beomi/Llama-3-Open-Ko-8B-Instruct-preview #base_model-NousResearch/Meta-Llama-3-8B-Instruct #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 🇰🇷 SmartLlama-3-Ko-8B-256k-PoSE\n\n<a href=\"URL src=\"https://i.URL alt=\"Smart-Llama-3-Ko-8-B-256k-Po-SE\" border=\"0\"></a>\n\nSmartLlama-3-Ko-8B-256k-PoSE is an advanced AI model that integrates the capabilities of several advanced language models, designed to excel in a variety of tasks ranging from technical problem-solving to multilingual communication, especially with its extended context length of 256k tokens. This model is uniquely positioned to handle larger and more complex datasets and longer conversational contexts, making it ideal for deep learning applications requiring extensive text understanding and generation.",
"## Merge Details",
"### Component Models and Contributions\n- NousResearch/Meta-Llama-3-8B and Meta-Llama-3-8B-Instruct: These models provide a solid foundation for general language understanding and instruction-following capabilities.\n- winglian/llama-3-8b-256k-PoSE: Utilizes Positional Skip-wise Training (PoSE) to extend Llama's context length to 256k, significantly improving the model's ability to handle extensive texts and complex instructions, enhancing performance in tasks requiring long-duration focus and memory.\n- Locutusque/Llama-3-Orca-1.0-8B: Specializes in mathematical, coding, and writing tasks, bringing precision to technical and creative outputs.\n- abacusai/Llama-3-Smaug-8B: Improves the model's performance in real-world, multi-turn conversations, which is crucial for applications in customer service and interactive learning environments.\n- beomi/Llama-3-Open-Ko-8B-Instruct-preview: Focuses on improving understanding and generation of Korean, offering robust solutions for bilingual or multilingual applications targeting Korean-speaking audiences.",
"## ️ Key Features\n\n- Extended Context Length: Utilizes the PoSE (Positional Encoding) technique to handle up to 256,000 tokens, making it ideal for analyzing large volumes of text such as books, comprehensive reports, and lengthy communications.\n \n- Multilingual Support: While primarily focused on Korean language processing, this model also provides robust support for multiple languages, enhancing its utility in global applications.\n \n- Advanced Integration of Models: Combines strengths from various models including NousResearch's Meta-Llama-3-8B, the instruction-following capabilities of Llama-3-Open-Ko-8B-Instruct-preview, and specialized capabilities from models like Llama-3-Smaug-8B for nuanced dialogues and Orca-1.0-8B for technical precision.",
"## Models Merged\n\nThe following models were included in the merge:\n- winglian/llama-3-8b-256k-PoSE: Extends the context handling capability. This model uses Positional Skip-wise Training (PoSE) to enhance the handling of extended context lengths, up to 256k tokens.\n- Locutusque/Llama-3-Orca-1.0-8B: Enhances abilities in handling technical content. Specialized in computational, scientific, and technical tasks, improving the model's ability to process complex academic and technical language.\n- abacusai/Llama-3-Smaug-8B: Improves multi-turn conversational abilities. Boosts performance in engaging in lengthy, context-aware dialogues necessary for effective customer service and interactive learning.\n- beomi/Llama-3-Open-Ko-8B-Instruct-preview: Provides enhanced capabilities for Korean language processing. This model is fine-tuned to understand and generate Korean, making it ideal for applications targeting Korean-speaking users.\n- NousResearch/Meta-Llama-3-8B-Instruct: Offers advanced instruction-following capabilities. It is optimized to follow complex instructions, enhancing the model's utility in task-oriented dialogues and applications that require a high level of understanding and execution of user commands.",
"### ️ Merge Method\n- DARE TIES: This method was employed to ensure that each component model contributes effectively to the merged model, maintaining a high level of performance across diverse applications. NousResearch/Meta-Llama-3-8B served as the base model for this integration, providing a stable and powerful framework for the other models to build upon.",
"### ️ Configuration\nThe YAML configuration for this model:",
"### Test Result\n\nSmartLlama-3-Ko-8B-256k-PoSE Summary Ability\n\nconsideration\n\nLong sentences seemed to summarize well, but I observed that answers came in English. And when I asked for it to be translated into Korean, I confirmed that it was translated well. The summary seems to work well, but you can take into account the fact that there are times when it cannot be summarized directly in Korean.\n\n<a href=\"URL src=\"https://i.URL alt=\"Screenshot-2024-05-02-at-6-44-30-AM\" border=\"0\"></a>\n<a href=\"URL src=\"https://i.URL alt=\"Screenshot-2024-05-02-at-6-44-42-AM\" border=\"0\"></a>\nSource: Korea Institute for Industrial Economics and Trade: Macroeconomic Outlook for 2024."
] |
null | transformers | # 🇰🇷 SmartLlama-3-Ko-8B-256k-PoSE
<a href="https://ibb.co/rs8DhB8"><img src="https://i.ibb.co/8cv1wyv/Smart-Llama-3-Ko-8-B-256k-Po-SE.png" alt="Smart-Llama-3-Ko-8-B-256k-Po-SE" border="0"></a>
SmartLlama-3-Ko-8B-256k-[PoSE](https://huggingface.co/papers/2309.10400) is an advanced AI model that integrates the capabilities of several advanced language models, designed to excel in a variety of tasks ranging from technical problem-solving to multilingual communication, especially with its extended context length of 256k tokens. This model is uniquely positioned to handle larger and more complex datasets and longer conversational contexts, making it ideal for deep learning applications requiring extensive text understanding and generation.
## 📕 Merge Details
### Component Models and Contributions
- **NousResearch/Meta-Llama-3-8B and Meta-Llama-3-8B-Instruct**: These models provide a solid foundation for general language understanding and instruction-following capabilities.
- **winglian/llama-3-8b-256k-PoSE**: Utilizes Positional Skip-wise Training (PoSE) to extend Llama's context length to 256k, significantly improving the model's ability to handle extensive texts and complex instructions, enhancing performance in tasks requiring long-duration focus and memory.
- **Locutusque/Llama-3-Orca-1.0-8B**: Specializes in mathematical, coding, and writing tasks, bringing precision to technical and creative outputs.
- **abacusai/Llama-3-Smaug-8B**: Improves the model's performance in real-world, multi-turn conversations, which is crucial for applications in customer service and interactive learning environments.
- **beomi/Llama-3-Open-Ko-8B-Instruct-preview**: Focuses on improving understanding and generation of Korean, offering robust solutions for bilingual or multilingual applications targeting Korean-speaking audiences.
## 🖼️ Key Features
- **Extended Context Length**: Utilizes the PoSE (Positional Skip-wise Training) technique to handle up to 256,000 tokens, making it ideal for analyzing large volumes of text such as books, comprehensive reports, and lengthy communications.
- **Multilingual Support**: While primarily focused on Korean language processing, this model also provides robust support for multiple languages, enhancing its utility in global applications.
- **Advanced Integration of Models**: Combines strengths from various models including NousResearch's Meta-Llama-3-8B, the instruction-following capabilities of Llama-3-Open-Ko-8B-Instruct-preview, and specialized capabilities from models like Llama-3-Smaug-8B for nuanced dialogues and Orca-1.0-8B for technical precision.
## 🎨 Models Merged
The following models were included in the merge:
- **winglian/llama-3-8b-256k-PoSE**: [Extends the context handling capability](https://huggingface.co/winglian/llama-3-8b-256k-PoSE). This model uses Positional Skip-wise Training (PoSE) to enhance the handling of extended context lengths, up to 256k tokens.
- **Locutusque/Llama-3-Orca-1.0-8B**: [Enhances abilities in handling technical content](https://huggingface.co/Locutusque/Llama-3-Orca-1.0-8B). Specialized in computational, scientific, and technical tasks, improving the model's ability to process complex academic and technical language.
- **abacusai/Llama-3-Smaug-8B**: [Improves multi-turn conversational abilities](https://huggingface.co/abacusai/Llama-3-Smaug-8B). Boosts performance in engaging in lengthy, context-aware dialogues necessary for effective customer service and interactive learning.
- **beomi/Llama-3-Open-Ko-8B-Instruct-preview**: [Provides enhanced capabilities for Korean language processing](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview). This model is fine-tuned to understand and generate Korean, making it ideal for applications targeting Korean-speaking users.
- **NousResearch/Meta-Llama-3-8B-Instruct**: [Offers advanced instruction-following capabilities](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct). It is optimized to follow complex instructions, enhancing the model's utility in task-oriented dialogues and applications that require a high level of understanding and execution of user commands.
### 🖋️ Merge Method
- **DARE TIES**: This method was employed to ensure that each component model contributes effectively to the merged model, maintaining a high level of performance across diverse applications. NousResearch/Meta-Llama-3-8B served as the base model for this integration, providing a stable and powerful framework for the other models to build upon.
## 💻 Ollama
```
ollama create smartllama-3-Ko-8b-256k-pose -f ./Modelfile_Q5_K_M
```
[Modelfile_Q5_K_M]
```
FROM smartllama-3-ko-8b-256k-pose-Q5_K_M.gguf
TEMPLATE """
{{- if .System }}
system
<s>{{ .System }}</s>
{{- end }}
user
<s>Human:
{{ .Prompt }}</s>
assistant
<s>Assistant:
"""
SYSTEM """
친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 길이에 상관없이 모든 대답은 한국어(Korean)으로 대답해줘.
"""
PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER num_ctx 256000
PARAMETER stop "<s>"
PARAMETER stop "</s>"
```
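Once the model is created, a quick interactive check might look like this (the prompt is only an example). Note that the Python scripts below refer to the model as `pose:latest`, so either create the model under that name as well or adjust the `model=` argument in the scripts:

```
ollama run smartllama-3-Ko-8b-256k-pose "한국어로 자기소개를 해줘."
```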
## 💻 Ollama Python Summarizing Normal Test Code
Install all of these libraries:
```
pip install requests beautifulsoup4 PyPDF2 langchain-community langchain
```
pose_test.py
```python
import sys
import os
import requests
from bs4 import BeautifulSoup
import PyPDF2
from langchain_community.chat_models import ChatOllama
from langchain.schema import AIMessage, HumanMessage, SystemMessage
def clean_output(text):
text = text.replace("</s>", "").strip()
return text
def invoke_model(text):
messages = [
SystemMessage(content='You are an expert copywriter with expertise in summarizing documents.'),
HumanMessage(content=f'Please provide a short and concise summary of the following text:\nTEXT: {text}')
]
try:
llm = ChatOllama(model="pose:latest")
summary_output = llm.invoke(messages)
if isinstance(summary_output, AIMessage):
cleaned_content = clean_output(summary_output.content)
return cleaned_content
else:
return "Unexpected data type for model output."
except Exception as e:
print(f"An error occurred while processing the model output: {str(e)}")
return None
def fetch_text_from_url(url):
try:
response = requests.get(url)
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')
content = soup.find('div', {'id': 'bodyContent'})
paragraphs = content.find_all('p')
text_content = ' '.join(p.text for p in paragraphs)
return text_content
except requests.RequestException as e:
print(f"Failed to fetch data from URL: {str(e)}")
return None
def read_text_file(file_path):
with open(file_path, "r", encoding="utf-8") as file:
return file.read()
def read_pdf(file_path):
with open(file_path, "rb") as file:
reader = PyPDF2.PdfReader(file)
text_content = ""
for page in reader.pages:
extracted_text = page.extract_text()
if extracted_text:
text_content += extracted_text + "\n"
return text_content
def summarize_content(source):
if source.startswith(('http://', 'https://')):
text_content = fetch_text_from_url(source)
else:
_, file_extension = os.path.splitext(source)
if file_extension.lower() == '.pdf':
text_content = read_pdf(source)
elif file_extension.lower() in ['.txt', '.text']:
text_content = read_text_file(source)
else:
print("Unsupported file type")
return
if text_content:
summary = invoke_model(text_content)
print("Summary of the document:")
print(summary)
else:
print("No text found or unable to extract text from source.")
if __name__ == '__main__':
if len(sys.argv) < 2:
print("Usage: python script.py <file_path_or_url>")
else:
source = sys.argv[1]
summarize_content(source)
```
run txt file (assume txt is a.txt)
```
python pose_test.py a.txt
```
run url (assume the source is a url)
```
python pose_test.py url
```
You can find both test results below in the section: Test Result1 (Normal)
## 💻 Ollama Python Summarizing Test Code for the target lang response
Install all of these libraries:
```
pip install requests beautifulsoup4 PyPDF2 googletrans==4.0.0-rc1 langchain-community langchain aiohttp asyncio aiofiles
```
pose_lang.py
```python
import sys
import os
import aiohttp
import PyPDF2
from bs4 import BeautifulSoup
from langchain_community.chat_models import ChatOllama
from langchain.schema import AIMessage, HumanMessage, SystemMessage
from googletrans import Translator
import logging
import asyncio
import aiofiles
# Setup logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
def clean_output(text):
"""Cleans the model output text."""
text = text.replace("</s>", "").strip() # Specific cleaning operation
return text
def translate_text(text, src_lang, dest_lang):
"""Translates text from source language to destination language using Google Translate."""
if src_lang == dest_lang:
return text
translator = Translator()
try:
translation = translator.translate(text, src=src_lang, dest=dest_lang)
return translation.text
except Exception as e:
logging.error(f"Translation failed: {e}")
return text
def detect_language(text):
"""Detects the language of the given text."""
translator = Translator()
try:
detected = translator.detect(text)
return detected.lang
except Exception as e:
logging.error(f"Language detection failed: {e}")
return None
async def invoke_model(text, target_lang):
"""Asynchronously invokes the chat model and processes the response with language-specific instructions."""
llm = ChatOllama(model="pose:latest")
try:
# Define messages based on target language
if target_lang == 'ko':
messages = [
SystemMessage(content='문서의 핵심 요약을 상세하게 제공해 주실 전문가로서, 다음 문서를 요약해 주세요.'),
HumanMessage(content=f'다음 텍스트에 대한 전문적 요약을 제공해 주세요. 요약은 한국어의 언어적 뉘앙스에 맞게 최고 수준의 명확성과 세부 사항을 준수해야 합니다:\n\nTEXT: {text}')
]
else: # default to English if not Korean
messages = [
SystemMessage(content='As an adept summarizer, your expertise is required to condense the following document into its essential points in detail.'),
HumanMessage(content=f'Kindly provide an expert summary of the text below, adhering to the highest standards of clarity and detail. Ensure the response is tailored to the linguistic nuances of English:\n\nTEXT: {text}')
]
# Since invoke is not awaitable, run it in a thread if it's blocking
response = await asyncio.to_thread(llm.invoke, messages)
if isinstance(response, AIMessage):
cleaned_content = clean_output(response.content)
content_lang = detect_language(cleaned_content)
print(f"Current content language: {content_lang}, Target language to be translated to: {target_lang}")
if content_lang != target_lang:
return translate_text(cleaned_content, content_lang, target_lang)
return cleaned_content
else:
raise ValueError("Model did not return an AIMessage")
except Exception as e:
logging.error(f"Error during model invocation: {e}")
return "Model invocation failed."
async def fetch_text_from_url(url):
"""Asynchronously fetches and extracts text content from a given URL."""
async with aiohttp.ClientSession() as session:
try:
async with session.get(url) as response:
content = await response.text()
soup = BeautifulSoup(content, 'html.parser')
main_content = soup.select_one('#mw-content-text, #bodyContent, .content')
if not main_content:
logging.error("No content found in the expected sections.")
return None
text_content = ' '.join(p.get_text() for p in main_content.find_all(['p', 'li'], string=True))
return text_content
except Exception as e:
logging.error(f"Error fetching URL content: {e}")
return None
async def read_text_file(file_path):
"""Asynchronously reads text from a text file."""
async with aiofiles.open(file_path, mode='r', encoding='utf-8') as file:
text_content = await file.read()
return text_content
async def read_pdf(file_path):
"""Asynchronously reads text from a PDF file."""
def sync_read_pdf(path):
try:
with open(path, "rb") as file:
reader = PyPDF2.PdfReader(file)
return ' '.join(page.extract_text() for page in reader.pages if page.extract_text())
except Exception as e:
logging.error(f"Error reading PDF file: {e}")
return None
return await asyncio.to_thread(sync_read_pdf, file_path)
async def summarize_content(source, language):
"""Processes input source (URL, file, text) and outputs a summary in the specified language asynchronously."""
print("Processing input...")
text_content = None
if source.startswith(('http://', 'https://')):
print("Fetching content from URL...")
text_content = await fetch_text_from_url(source)
elif os.path.isfile(source):
_, file_extension = os.path.splitext(source)
if file_extension.lower() == '.pdf':
print("Reading PDF...")
text_content = await read_pdf(source)
elif file_extension.lower() in ['.txt', '.text']:
print("Reading text file...")
text_content = await read_text_file(source)
else:
print("Unsupported file type")
return
else:
print("Unsupported file type")
return
if text_content:
print("Summarizing content...")
summary = await invoke_model(text_content, language)
print("\n--- Summary of the document ---\n")
print(summary)
else:
print("No text found or unable to extract text from source.")
if __name__ == '__main__':
if len(sys.argv) < 3:
print("Usage: python script.py <file_path_or_url_or_text> <language>")
print("Language should be 'ko' for Korean or 'en' for English.")
else:
source = sys.argv[1]
language = sys.argv[2]
asyncio.run(summarize_content(source, language))
```
run txt file (assume txt is a.txt)
```
Korean response : python pose_lang a.txt ko
English response : python pose_lang a.txt en
```
run pdf file (assume pdf is a.pdf)
```
Korean response : python pose_lang a.pdf ko
English response : python pose_lang a.pdf en
```
run url (assume the url is a Wikipedia page)
```
Korean response : python pose_lang url ko
English response : python pose_lang url en
```
I added an extra Google Translate step here. If you request an answer in Korean but the model occasionally hallucinates the language and responds in English, this function detects it and returns the answer in Korean.
Conversely, if you request a response in English and it comes back in Korean due to the same language hallucination, the function detects it and responds in English.
You can find both test results below in the section: Test Result2 (Target Language Summary Return)
### 🗞️ Configuration
The YAML configuration for this model:
```yaml
models:
- model: NousResearch/Meta-Llama-3-8B
# Base model providing a general foundation without specific parameters
- model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
density: 0.60
weight: 0.25
- model: winglian/llama-3-8b-256k-PoSE
parameters:
density: 0.60
weight: 0.20
- model: Locutusque/Llama-3-Orca-1.0-8B
parameters:
density: 0.55
weight: 0.15
- model: abacusai/Llama-3-Smaug-8B
parameters:
density: 0.55
weight: 0.15
- model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
parameters:
density: 0.55
weight: 0.30
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
int8_mask: true
dtype: bfloat16
```
Test OS Condition
```
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro18,2
Chip: Apple M1 Max
Total Number of Cores: 10 (8 performance and 2 efficiency)
Memory: 64 GB
System Firmware Version: 10151.101.3
OS Loader Version: 10151.101.3
```
### 🎊 Test Result1 (Normal)
**SmartLlama-3-Ko-8B-256k-PoSE Summary Ability**
consideration
Long texts seemed to summarize well, but I observed that the answers sometimes came back in English. When I then asked for the answer to be translated into Korean, it was translated correctly. Summarization itself works well, but keep in mind that the model occasionally cannot produce the summary directly in Korean.
## Summary of Britney Spears on Wikipedia
[Screenshot: summary of Britney Spears](https://ibb.co/7zxxL9M)
## Summary of Steve Jobs Text File
[Screenshot: summary of the Steve Jobs text file](https://ibb.co/9pkyxbS)
## Summary of Jay Park on Wikipedia
[Screenshot: summary of Jay Park](https://ibb.co/g9gY3Vh)
### 🎊 Test Result2 (Target Language Summary Return)
**SmartLlama-3-Ko-8B-256k-PoSE Summary Ability**
consideration
I added additional Google Translator here. If you request an answer in Korean and the answer is in English, this function detects it and answers you in Korean.
Conversely, if you request a response in English and the response is in Korean, this function detects it and responds in English.
If you don't get a clear answer, try running it several times.
## Summary of economy pdf
```
python final2.py economy.pdf ko
# if you want english summary, en
```
[Screenshot: summary of the economy PDF](https://ibb.co/JKgCDYt)
## Summary of Steve Jobs Text File
```
python final2.py steve.txt ko
# if you want english summary, en
```
[Screenshot: summary of the Steve Jobs text file](https://ibb.co/PY6hH8d)
## Summary of Jay Park on Wikipedia
```
python final2.py https://en.wikipedia.org/wiki/Jay_Park ko
# if you want english summary, en
```
[Screenshot: summary of Jay Park](https://ibb.co/j6CPyW0)
**Test Sources**
[박재범 - wikipedia - EN](https://en.wikipedia.org/wiki/Jay_Park)
[박재범 - wikipedia - KR](https://ko.wikipedia.org/wiki/%EB%B0%95%EC%9E%AC%EB%B2%94)
[Britney Spears - wikipedia - EN](https://en.wikipedia.org/wiki/Britney_Spears)
[한국은행 경제전망 보고서 - KR](https://www.bok.or.kr/viewer/skin/doc.html?fn=202402290251197820.pdf&rs=/webview/result/P0002359/202402)
[Community member Mr. Han's Steve Jobs txt file]
### ⛑️ Test Issue
2024-05-02
```
If you use load_summarize_chain(), the output contains repetition. -> issue reported by community member Mr. Han
Is it a merge issue? He thinks the merge target may be the cause.
chain = load_summarize_chain(
llm,
chain_type='stuff',
prompt=prompt,
verbose=False
)
output_summary = chain.invoke(docs)
-> I am still investigating how to solve this...
```
```
Mr. Han is investigating the symptoms.
Your OS is using REDHAT. Even if I run the code using the LLAMA3 model provided by ollama, there is an error.
I wonder if I should wait a little longer for Red Hat...
<|eot_id|><|start_header_id|>assistant<|end_header_id|>, ... omitted
Ha ha, thanks for the chat! You too have a great day and happy summarizing if you need it again soon!<|eot_id|><|start_header_id|>assistant<|end_header_id|>
It's not a merge problem... I think it's a fundamental problem that doesn't fit the OS environment... so I'm sharing it with you. Is there anyone who has the same problem as me in redhat?
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["winglian/llama-3-8b-256k-PoSE", "Locutusque/Llama-3-Orca-1.0-8B", "NousResearch/Meta-Llama-3-8B", "abacusai/Llama-3-Smaug-8B", "beomi/Llama-3-Open-Ko-8B-Instruct-preview", "NousResearch/Meta-Llama-3-8B-Instruct"]} | asiansoul/SmartLlama-3-Ko-8B-256k-PoSE-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"arxiv:2309.10400",
"base_model:winglian/llama-3-8b-256k-PoSE",
"base_model:Locutusque/Llama-3-Orca-1.0-8B",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:abacusai/Llama-3-Smaug-8B",
"base_model:beomi/Llama-3-Open-Ko-8B-Instruct-preview",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:22:53+00:00 | [
"2309.10400"
] | [] | TAGS
#transformers #gguf #mergekit #merge #arxiv-2309.10400 #base_model-winglian/llama-3-8b-256k-PoSE #base_model-Locutusque/Llama-3-Orca-1.0-8B #base_model-NousResearch/Meta-Llama-3-8B #base_model-abacusai/Llama-3-Smaug-8B #base_model-beomi/Llama-3-Open-Ko-8B-Instruct-preview #base_model-NousResearch/Meta-Llama-3-8B-Instruct #endpoints_compatible #region-us
| # 🇰🇷 SmartLlama-3-Ko-8B-256k-PoSE
<a href="URL src="https://i.URL alt="Smart-Llama-3-Ko-8-B-256k-Po-SE" border="0"></a>
SmartLlama-3-Ko-8B-256k-PoSE is an advanced AI model that integrates the capabilities of several advanced language models, designed to excel in a variety of tasks ranging from technical problem-solving to multilingual communication, especially with its extended context length of 256k tokens. This model is uniquely positioned to handle larger and more complex datasets and longer conversational contexts, making it ideal for deep learning applications requiring extensive text understanding and generation.
## Merge Details
### Component Models and Contributions
- NousResearch/Meta-Llama-3-8B and Meta-Llama-3-8B-Instruct: These models provide a solid foundation for general language understanding and instruction-following capabilities.
- winglian/llama-3-8b-256k-PoSE: Utilizes Positional Skip-wise Training (PoSE) to extend Llama's context length to 256k, significantly improving the model's ability to handle extensive texts and complex instructions, enhancing performance in tasks requiring long-duration focus and memory.
- Locutusque/Llama-3-Orca-1.0-8B: Specializes in mathematical, coding, and writing tasks, bringing precision to technical and creative outputs.
- abacusai/Llama-3-Smaug-8B: Improves the model's performance in real-world, multi-turn conversations, which is crucial for applications in customer service and interactive learning environments.
- beomi/Llama-3-Open-Ko-8B-Instruct-preview: Focuses on improving understanding and generation of Korean, offering robust solutions for bilingual or multilingual applications targeting Korean-speaking audiences.
## ️ Key Features
- Extended Context Length: Utilizes the PoSE (Positional Encoding) technique to handle up to 256,000 tokens, making it ideal for analyzing large volumes of text such as books, comprehensive reports, and lengthy communications.
- Multilingual Support: While primarily focused on Korean language processing, this model also provides robust support for multiple languages, enhancing its utility in global applications.
- Advanced Integration of Models: Combines strengths from various models including NousResearch's Meta-Llama-3-8B, the instruction-following capabilities of Llama-3-Open-Ko-8B-Instruct-preview, and specialized capabilities from models like Llama-3-Smaug-8B for nuanced dialogues and Orca-1.0-8B for technical precision.
## Models Merged
The following models were included in the merge:
- winglian/llama-3-8b-256k-PoSE: Extends the context handling capability. This model uses Positional Skip-wise Training (PoSE) to enhance the handling of extended context lengths, up to 256k tokens.
- Locutusque/Llama-3-Orca-1.0-8B: Enhances abilities in handling technical content. Specialized in computational, scientific, and technical tasks, improving the model's ability to process complex academic and technical language.
- abacusai/Llama-3-Smaug-8B: Improves multi-turn conversational abilities. Boosts performance in engaging in lengthy, context-aware dialogues necessary for effective customer service and interactive learning.
- beomi/Llama-3-Open-Ko-8B-Instruct-preview: Provides enhanced capabilities for Korean language processing. This model is fine-tuned to understand and generate Korean, making it ideal for applications targeting Korean-speaking users.
- NousResearch/Meta-Llama-3-8B-Instruct: Offers advanced instruction-following capabilities. It is optimized to follow complex instructions, enhancing the model's utility in task-oriented dialogues and applications that require a high level of understanding and execution of user commands.
### ️ Merge Method
- DARE TIES: This method was employed to ensure that each component model contributes effectively to the merged model, maintaining a high level of performance across diverse applications. NousResearch/Meta-Llama-3-8B served as the base model for this integration, providing a stable and powerful framework for the other models to build upon.
## Ollama
[Modelfile_Q5_K_M]
## Ollama Python Summarizing Normal Test Code
install all of these libraries
pose_test.py
run txt file (assume txt is a.txt)
run url (assume txt is url)
You can find both test results below on the section : Test Result1
## Ollama Python Summarizing Test Code for the target lang response
install all of these libraries
pose_lang.py
run txt file (assume txt is a.txt)
run pdf file (assume pdf is a.pdf)
run url (assume url is wikepedia)
I added additional Google Translator here. If you request an answer in Korean and the answer is in English sometimes for the lang hallucination, this function detects it and answers you in Korean.
Conversely, if you request a response in English and the response is in Korean for the lang hallucination, this function detects it and responds in English.
You can find both test results below on the section : Test Result2 for target lang response
### ️ Configuration
The YAML configuration for this model:
Test OS Condition
### Test Result1 (Normal)
SmartLlama-3-Ko-8B-256k-PoSE Summary Ability
consideration
Long sentences seemed to summarize well, but I observed that answers came in English. And when I asked for it to be translated into Korean, I confirmed that it was translated well. The summary seems to work well, but you can take into account the fact that there are times when it cannot be summarized directly in Korean.
## Summary of Britney Spears on Wikipedia

SmartLlama-3-Ko-8B-256k-PoSE Summary Ability
consideration
I added additional Google Translator here. If you request an answer in Korean and the answer is in English, this function detects it and answers you in Korean.
Conversely, if you request a response in English and the response is in Korean, this function detects it and responds in English.
If you don't get a clear answer, try running it several times.
## Summary of economy pdf
 to extend Llama's context length to 256k, significantly improving the model's ability to handle extensive texts and complex instructions, enhancing performance in tasks requiring long-duration focus and memory.\n- Locutusque/Llama-3-Orca-1.0-8B: Specializes in mathematical, coding, and writing tasks, bringing precision to technical and creative outputs.\n- abacusai/Llama-3-Smaug-8B: Improves the model's performance in real-world, multi-turn conversations, which is crucial for applications in customer service and interactive learning environments.\n- beomi/Llama-3-Open-Ko-8B-Instruct-preview: Focuses on improving understanding and generation of Korean, offering robust solutions for bilingual or multilingual applications targeting Korean-speaking audiences.",
"## ️ Key Features\n\n- Extended Context Length: Utilizes the PoSE (Positional Encoding) technique to handle up to 256,000 tokens, making it ideal for analyzing large volumes of text such as books, comprehensive reports, and lengthy communications.\n \n- Multilingual Support: While primarily focused on Korean language processing, this model also provides robust support for multiple languages, enhancing its utility in global applications.\n \n- Advanced Integration of Models: Combines strengths from various models including NousResearch's Meta-Llama-3-8B, the instruction-following capabilities of Llama-3-Open-Ko-8B-Instruct-preview, and specialized capabilities from models like Llama-3-Smaug-8B for nuanced dialogues and Orca-1.0-8B for technical precision.",
"## Models Merged\n\nThe following models were included in the merge:\n- winglian/llama-3-8b-256k-PoSE: Extends the context handling capability. This model uses Positional Skip-wise Training (PoSE) to enhance the handling of extended context lengths, up to 256k tokens.\n- Locutusque/Llama-3-Orca-1.0-8B: Enhances abilities in handling technical content. Specialized in computational, scientific, and technical tasks, improving the model's ability to process complex academic and technical language.\n- abacusai/Llama-3-Smaug-8B: Improves multi-turn conversational abilities. Boosts performance in engaging in lengthy, context-aware dialogues necessary for effective customer service and interactive learning.\n- beomi/Llama-3-Open-Ko-8B-Instruct-preview: Provides enhanced capabilities for Korean language processing. This model is fine-tuned to understand and generate Korean, making it ideal for applications targeting Korean-speaking users.\n- NousResearch/Meta-Llama-3-8B-Instruct: Offers advanced instruction-following capabilities. It is optimized to follow complex instructions, enhancing the model's utility in task-oriented dialogues and applications that require a high level of understanding and execution of user commands.",
"### ️ Merge Method\n- DARE TIES: This method was employed to ensure that each component model contributes effectively to the merged model, maintaining a high level of performance across diverse applications. NousResearch/Meta-Llama-3-8B served as the base model for this integration, providing a stable and powerful framework for the other models to build upon.",
"## Ollama\n\n\n\n[Modelfile_Q5_K_M]",
"## Ollama Python Summarizing Normal Test Code\n\ninstall all of these libraries\n\n\npose_test.py\n\n\nrun txt file (assume txt is a.txt)\n\n\nrun url (assume txt is url)\n\n\nYou can find both test results below on the section : Test Result1",
"## Ollama Python Summarizing Test Code for the target lang response\n\ninstall all of these libraries\n\n\npose_lang.py\n\n\nrun txt file (assume txt is a.txt)\n\n\nrun pdf file (assume pdf is a.pdf)\n\n\nrun url (assume url is wikepedia)\n\nI added additional Google Translator here. If you request an answer in Korean and the answer is in English sometimes for the lang hallucination, this function detects it and answers you in Korean.\nConversely, if you request a response in English and the response is in Korean for the lang hallucination, this function detects it and responds in English.\n\nYou can find both test results below on the section : Test Result2 for target lang response",
"### ️ Configuration\nThe YAML configuration for this model:\n\n\n\nTest OS Condition",
"### Test Result1 (Normal)\n\nSmartLlama-3-Ko-8B-256k-PoSE Summary Ability\n\nconsideration\n\nLong sentences seemed to summarize well, but I observed that answers came in English. And when I asked for it to be translated into Korean, I confirmed that it was translated well. The summary seems to work well, but you can take into account the fact that there are times when it cannot be summarized directly in Korean.",
"## Summary of Britney Spears on Wikipedia\n\n\n\nSmartLlama-3-Ko-8B-256k-PoSE Summary Ability\n\nconsideration\n\nI added additional Google Translator here. If you request an answer in Korean and the answer is in English, this function detects it and answers you in Korean.\nConversely, if you request a response in English and the response is in Korean, this function detects it and responds in English.\n\nIf you don't get a clear answer, try running it several times.",
"## Summary of economy pdf\n\n\n\n to extend Llama's context length to 256k, significantly improving the model's ability to handle extensive texts and complex instructions, enhancing performance in tasks requiring long-duration focus and memory.\n- Locutusque/Llama-3-Orca-1.0-8B: Specializes in mathematical, coding, and writing tasks, bringing precision to technical and creative outputs.\n- abacusai/Llama-3-Smaug-8B: Improves the model's performance in real-world, multi-turn conversations, which is crucial for applications in customer service and interactive learning environments.\n- beomi/Llama-3-Open-Ko-8B-Instruct-preview: Focuses on improving understanding and generation of Korean, offering robust solutions for bilingual or multilingual applications targeting Korean-speaking audiences.",
"## ️ Key Features\n\n- Extended Context Length: Utilizes the PoSE (Positional Encoding) technique to handle up to 256,000 tokens, making it ideal for analyzing large volumes of text such as books, comprehensive reports, and lengthy communications.\n \n- Multilingual Support: While primarily focused on Korean language processing, this model also provides robust support for multiple languages, enhancing its utility in global applications.\n \n- Advanced Integration of Models: Combines strengths from various models including NousResearch's Meta-Llama-3-8B, the instruction-following capabilities of Llama-3-Open-Ko-8B-Instruct-preview, and specialized capabilities from models like Llama-3-Smaug-8B for nuanced dialogues and Orca-1.0-8B for technical precision.",
"## Models Merged\n\nThe following models were included in the merge:\n- winglian/llama-3-8b-256k-PoSE: Extends the context handling capability. This model uses Positional Skip-wise Training (PoSE) to enhance the handling of extended context lengths, up to 256k tokens.\n- Locutusque/Llama-3-Orca-1.0-8B: Enhances abilities in handling technical content. Specialized in computational, scientific, and technical tasks, improving the model's ability to process complex academic and technical language.\n- abacusai/Llama-3-Smaug-8B: Improves multi-turn conversational abilities. Boosts performance in engaging in lengthy, context-aware dialogues necessary for effective customer service and interactive learning.\n- beomi/Llama-3-Open-Ko-8B-Instruct-preview: Provides enhanced capabilities for Korean language processing. This model is fine-tuned to understand and generate Korean, making it ideal for applications targeting Korean-speaking users.\n- NousResearch/Meta-Llama-3-8B-Instruct: Offers advanced instruction-following capabilities. It is optimized to follow complex instructions, enhancing the model's utility in task-oriented dialogues and applications that require a high level of understanding and execution of user commands.",
"### ️ Merge Method\n- DARE TIES: This method was employed to ensure that each component model contributes effectively to the merged model, maintaining a high level of performance across diverse applications. NousResearch/Meta-Llama-3-8B served as the base model for this integration, providing a stable and powerful framework for the other models to build upon.",
"## Ollama\n\n\n\n[Modelfile_Q5_K_M]",
"## Ollama Python Summarizing Normal Test Code\n\ninstall all of these libraries\n\n\npose_test.py\n\n\nrun txt file (assume txt is a.txt)\n\n\nrun url (assume txt is url)\n\n\nYou can find both test results below on the section : Test Result1",
"## Ollama Python Summarizing Test Code for the target lang response\n\ninstall all of these libraries\n\n\npose_lang.py\n\n\nrun txt file (assume txt is a.txt)\n\n\nrun pdf file (assume pdf is a.pdf)\n\n\nrun url (assume url is wikepedia)\n\nI added additional Google Translator here. If you request an answer in Korean and the answer is in English sometimes for the lang hallucination, this function detects it and answers you in Korean.\nConversely, if you request a response in English and the response is in Korean for the lang hallucination, this function detects it and responds in English.\n\nYou can find both test results below on the section : Test Result2 for target lang response",
"### ️ Configuration\nThe YAML configuration for this model:\n\n\n\nTest OS Condition",
"### Test Result1 (Normal)\n\nSmartLlama-3-Ko-8B-256k-PoSE Summary Ability\n\nconsideration\n\nLong sentences seemed to summarize well, but I observed that answers came in English. And when I asked for it to be translated into Korean, I confirmed that it was translated well. The summary seems to work well, but you can take into account the fact that there are times when it cannot be summarized directly in Korean.",
"## Summary of Britney Spears on Wikipedia\n\n\n\nSmartLlama-3-Ko-8B-256k-PoSE Summary Ability\n\nconsideration\n\nI added additional Google Translator here. If you request an answer in Korean and the answer is in English, this function detects it and answers you in Korean.\nConversely, if you request a response in English and the response is in Korean, this function detects it and responds in English.\n\nIf you don't get a clear answer, try running it several times.",
"## Summary of economy pdf\n\n\n\n and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | Cognitus-Stuti/model | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:24:29+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: Cognitus-Stuti
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: Cognitus-Stuti\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: Cognitus-Stuti\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# best_model
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training (a minimal `BitsAndBytesConfig` equivalent is sketched after the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
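The config above maps onto the `transformers` API roughly as follows (a sketch only — the actual training script is not included in this card):

```python
# Sketch of the 4-bit quantization config listed above, using the values from the card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",  # base model named in this card
    quantization_config=bnb_config,
    device_map="auto",
)
```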
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 4
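Expressed as `transformers.TrainingArguments`, these settings would look roughly like the sketch below (the exact trainer setup used is not published here):

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="best_model",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",        # Adam with betas=(0.9, 0.999) and epsilon=1e-08
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    max_steps=4,
)
```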
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2888 | 0.02 | 4 | 2.0978 |
### Framework versions
- PEFT 0.4.0
- Transformers 4.37.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "best_model", "results": []}]} | hussamsal/best_model | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T14:24:54+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
| best\_model
===========
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0978
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
The following 'bitsandbytes' quantization config was used during training:
* quant\_method: bitsandbytes
* load\_in\_8bit: False
* load\_in\_4bit: True
* llm\_int8\_threshold: 6.0
* llm\_int8\_skip\_modules: None
* llm\_int8\_enable\_fp32\_cpu\_offload: False
* llm\_int8\_has\_fp16\_weight: False
* bnb\_4bit\_quant\_type: nf4
* bnb\_4bit\_use\_double\_quant: False
* bnb\_4bit\_compute\_dtype: float16
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.03
* training\_steps: 4
### Training results
### Framework versions
* PEFT 0.4.0
* Transformers 4.37.2
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* training\\_steps: 4",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.4.0\n* Transformers 4.37.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* training\\_steps: 4",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.4.0\n* Transformers 4.37.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] |
fill-mask | transformers |
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
30 April 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | BERTrand_bs32_lr6 |
| sequence_length | 400 |
| num_epoch | 12 |
| learning_rate | 5e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 6318 |
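Since the checkpoint is an ALBERT model trained with a masked-LM objective, a minimal inference sketch could look like this (an assumption: the uploaded weights expose a masked-LM head usable by the standard `fill-mask` pipeline):

```python
# Sketch: masked-language-model inference with this checkpoint.
# Assumes damgomz/BERTrand_bs32_lr6 loads into the standard fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="damgomz/BERTrand_bs32_lr6")
for candidate in fill("Paris is the [MASK] of France."):
    print(candidate["token_str"], round(candidate["score"], 4))
```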
## Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 15.603048 | 15.109937 |
| 0.5 | 8.715844 | 8.071290 |
| 1.0 | 7.608879 | 8.114126 |
| 1.5 | 7.407612 | 7.914163 |
| 2.0 | 7.323461 | 7.774658 |
| 2.5 | 7.248362 | 7.696718 |
| 3.0 | 7.101276 | 7.856242 |
| 3.5 | 7.134161 | 7.617901 |
| 4.0 | 7.105548 | 7.837306 |
| 4.5 | 7.221799 | 7.653854 |
| 5.0 | 7.047156 | 7.659136 |
| 5.5 | 7.080983 | 7.554190 |
| 6.0 | 7.083629 | 7.670907 |
| 6.5 | 7.180606 | 7.623875 |
| 7.0 | 7.036574 | 7.571451 |
| 7.5 | 7.037596 | 7.550659 |
| 8.0 | 7.082738 | 7.634689 |
| 8.5 | 7.136363 | 7.576325 |
| 9.0 | 7.046428 | 7.594891 |
| 9.5 | 7.022868 | 7.588534 |
| 10.0 | 7.075124 | 7.532026 |
| 10.5 | 7.078401 | 7.519065 |
| 11.0 | 7.109886 | 7.550544 |
| {"language": "en", "tags": ["fill-mask"]} | damgomz/BERTrand_bs32_lr6 | null | [
"transformers",
"safetensors",
"albert",
"pretraining",
"fill-mask",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:25:22+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #albert #pretraining #fill-mask #en #endpoints_compatible #region-us
| Environmental Impact (CODE CARBON DEFAULT)
------------------------------------------
Environmental Impact (for one core)
-----------------------------------
Note
----
30 April 2024
My Config
---------
Training and Testing steps
--------------------------
Epoch: 0.0, Train Loss: 15.603048, Test Loss: 15.109937
Epoch: 0.5, Train Loss: 8.715844, Test Loss: 8.071290
Epoch: 1.0, Train Loss: 7.608879, Test Loss: 8.114126
Epoch: 1.5, Train Loss: 7.407612, Test Loss: 7.914163
Epoch: 2.0, Train Loss: 7.323461, Test Loss: 7.774658
Epoch: 2.5, Train Loss: 7.248362, Test Loss: 7.696718
Epoch: 3.0, Train Loss: 7.101276, Test Loss: 7.856242
Epoch: 3.5, Train Loss: 7.134161, Test Loss: 7.617901
Epoch: 4.0, Train Loss: 7.105548, Test Loss: 7.837306
Epoch: 4.5, Train Loss: 7.221799, Test Loss: 7.653854
Epoch: 5.0, Train Loss: 7.047156, Test Loss: 7.659136
Epoch: 5.5, Train Loss: 7.080983, Test Loss: 7.554190
Epoch: 6.0, Train Loss: 7.083629, Test Loss: 7.670907
Epoch: 6.5, Train Loss: 7.180606, Test Loss: 7.623875
Epoch: 7.0, Train Loss: 7.036574, Test Loss: 7.571451
Epoch: 7.5, Train Loss: 7.037596, Test Loss: 7.550659
Epoch: 8.0, Train Loss: 7.082738, Test Loss: 7.634689
Epoch: 8.5, Train Loss: 7.136363, Test Loss: 7.576325
Epoch: 9.0, Train Loss: 7.046428, Test Loss: 7.594891
Epoch: 9.5, Train Loss: 7.022868, Test Loss: 7.588534
Epoch: 10.0, Train Loss: 7.075124, Test Loss: 7.532026
Epoch: 10.5, Train Loss: 7.078401, Test Loss: 7.519065
Epoch: 11.0, Train Loss: 7.109886, Test Loss: 7.550544
| [] | [
"TAGS\n#transformers #safetensors #albert #pretraining #fill-mask #en #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | efeno/llama3_RAFT_4_epochs | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:26:04+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
30 April 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | BERTrand_bs32_lr5 |
| sequence_length | 400 |
| num_epoch | 12 |
| learning_rate | 5e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 6287 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 15.495780 | 13.831327 |
| 0.5 | 7.825472 | 7.840593 |
| 1.0 | 7.327533 | 7.785610 |
| 1.5 | 7.205367 | 7.586150 |
| 2.0 | 7.151769 | 7.663743 |
| 2.5 | 7.125600 | 8.101605 |
| 3.0 | 7.034717 | 7.773854 |
| 3.5 | 7.092155 | 7.549316 |
| 4.0 | 7.067814 | 7.819034 |
| 4.5 | 7.141888 | 7.587213 |
| 5.0 | 7.006890 | 7.892200 |
| 5.5 | 7.049742 | 7.752103 |
| 6.0 | 7.048553 | 7.844037 |
| 6.5 | 7.096755 | 7.641740 |
| 7.0 | 6.994647 | 7.617568 |
| 7.5 | 6.993773 | 7.864096 |
| 8.0 | 7.058714 | 7.730159 |
| 8.5 | 7.064419 | 7.629280 |
| 9.0 | 7.013462 | 7.746540 |
| 9.5 | 6.962919 | 8.147570 |
| 10.0 | 7.028505 | 7.587558 |
| 10.5 | 7.022366 | 7.531848 |
| {"language": "en", "tags": ["fill-mask"]} | damgomz/BERTrand_bs32_lr5 | null | [
"transformers",
"safetensors",
"albert",
"pretraining",
"fill-mask",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:27:32+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #albert #pretraining #fill-mask #en #endpoints_compatible #region-us
| Environmental Impact (CODE CARBON DEFAULT)
------------------------------------------
Environmental Impact (for one core)
-----------------------------------
Note
----
30 April 2024
My Config
---------
Training and Testing steps
--------------------------
Epoch: 0.0, Train Loss: 15.495780, Test Loss: 13.831327
Epoch: 0.5, Train Loss: 7.825472, Test Loss: 7.840593
Epoch: 1.0, Train Loss: 7.327533, Test Loss: 7.785610
Epoch: 1.5, Train Loss: 7.205367, Test Loss: 7.586150
Epoch: 2.0, Train Loss: 7.151769, Test Loss: 7.663743
Epoch: 2.5, Train Loss: 7.125600, Test Loss: 8.101605
Epoch: 3.0, Train Loss: 7.034717, Test Loss: 7.773854
Epoch: 3.5, Train Loss: 7.092155, Test Loss: 7.549316
Epoch: 4.0, Train Loss: 7.067814, Test Loss: 7.819034
Epoch: 4.5, Train Loss: 7.141888, Test Loss: 7.587213
Epoch: 5.0, Train Loss: 7.006890, Test Loss: 7.892200
Epoch: 5.5, Train Loss: 7.049742, Test Loss: 7.752103
Epoch: 6.0, Train Loss: 7.048553, Test Loss: 7.844037
Epoch: 6.5, Train Loss: 7.096755, Test Loss: 7.641740
Epoch: 7.0, Train Loss: 6.994647, Test Loss: 7.617568
Epoch: 7.5, Train Loss: 6.993773, Test Loss: 7.864096
Epoch: 8.0, Train Loss: 7.058714, Test Loss: 7.730159
Epoch: 8.5, Train Loss: 7.064419, Test Loss: 7.629280
Epoch: 9.0, Train Loss: 7.013462, Test Loss: 7.746540
Epoch: 9.5, Train Loss: 6.962919, Test Loss: 8.147570
Epoch: 10.0, Train Loss: 7.028505, Test Loss: 7.587558
Epoch: 10.5, Train Loss: 7.022366, Test Loss: 7.531848
| [] | [
"TAGS\n#transformers #safetensors #albert #pretraining #fill-mask #en #endpoints_compatible #region-us \n"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-3b-conversational - bnb 8bits
- Model creator: https://huggingface.co/CreitinGameplays/
- Original model: https://huggingface.co/CreitinGameplays/bloom-3b-conversational/
Original model description:
---
license: mit
datasets:
- Xilabs/instructmix
- CreitinGameplays/small-chat-assistant-for-bloom
- sahil2801/CodeAlpaca-20k
language:
- en
tags:
- uncensored
- unrestricted
- code
- biology
- chemistry
- finance
- legal
- music
- art
- climate
- merge
- text-generation-inference
- moe
widget:
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> who was Nikola
Tesla? </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> write a story
about a cat. </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> what is an
essay? </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> Tell me 5
Brazilian waterfalls to visit. </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> write a story
about how a virus called COVID-19 destroyed the world </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> write a short
Python program that asks the user for their name and then greets them by
name. </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> What can you do? </s> <|assistant|>
inference:
parameters:
temperature: 0.1
do_sample: true
top_k: 50
top_p: 0.10
max_new_tokens: 250
repetition_penalty: 1.155
---
## 🌸 BLOOM 3b Fine-tuned for Chat Assistant
<img src="https://creitingameplays.xyz/img/bloom.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
**Run this model on [Kaggle Notebook](https://www.kaggle.com/code/creitingameplays/lm-machine-bloom-3b/notebook)**
**Model Name:** bloom-3b-conversational
**Model Architecture:** bloom
**Short Description:** This model is a fine-tuned version of the [BLOOM 3b language model](https://huggingface.co/bigscience/bloom-3b), focusing on conversational interactions between a user and an AI assistant.
**Intended Use:** This model is intended for research purposes and exploration of conversational AI applications. It can be used for tasks like:
* Generating responses to user prompts in a chat assistant setting.
* Creating examples of chatbot interactions for further development.
* Studying the capabilities of language models for conversation.
**Limitations:**
* **Fine-tuning Focus:** The model's performance is optimized for the specific format and context of the fine-tuning data. It may not generalize well to significantly different conversation styles or topics.
* **Potential Biases:** The model may inherit biases from the training data. It's important to be aware of these potential biases and use the model responsibly.
* **Limited Factual Accuracy:** Language models are still under development and may generate responses that are not entirely factually accurate. It's important to verify information generated by the model with other sources.
* **Primarily English:** While the model can respond in other languages, the quality and accuracy of its responses may be lower compared to English. This is because the model was primarily fine-tuned on English data.
**Specific Input Format:**
The model was fine-tuned using a specific input format that goes like this:
```
<|system|> {system prompt} </s> <|prompter|> {user prompt} </s> <|assistant|> {model response}
```
Using this format when interacting with the model can improve its performance and generate more relevant responses.
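For example, a minimal generation script using that format might look like the sketch below (the model id and sampling parameters are taken from this card; everything else is illustrative):

```python
# Sketch: prompting bloom-3b-conversational with the input format described above.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CreitinGameplays/bloom-3b-conversational"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "<|system|> You are a helpful AI assistant. </s> "
    "<|prompter|> Tell me 5 Brazilian waterfalls to visit. </s> "
    "<|assistant|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=250,
    do_sample=True,
    temperature=0.1,
    top_k=50,
    top_p=0.10,
    repetition_penalty=1.155,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```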
**Disclaimer:** This model is for research and exploration purposes only. It should not be used in any applications that require high levels of accuracy or reliability.
| {} | RichardErkhov/CreitinGameplays_-_bloom-3b-conversational-8bits | null | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-01T14:27:57+00:00 | [] | [] | TAGS
#transformers #safetensors #bloom #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
bloom-3b-conversational - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: mit
datasets:
- Xilabs/instructmix
- CreitinGameplays/small-chat-assistant-for-bloom
- sahil2801/CodeAlpaca-20k
language:
- en
tags:
- uncensored
- unrestricted
- code
- biology
- chemistry
- finance
- legal
- music
- art
- climate
- merge
- text-generation-inference
- moe
widget:
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> who was Nikola
Tesla? </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> write a story
about a cat. </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> what is an
essay? </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> Tell me 5
Brazilian waterfalls to visit. </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> write a story
about how a virus called COVID-19 destroyed the world </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> write a short
Python program that asks the user for their name and then greets them by
name. </s> <|assistant|>
- text: >-
<|system|> You are a helpful AI assistant. </s> <|prompter|> What can you do? </s> <|assistant|>
inference:
parameters:
temperature: 0.1
do_sample: true
top_k: 50
top_p: 0.10
max_new_tokens: 250
repetition_penalty: 1.155
---
## BLOOM 3b Fine-tuned for Chat Assistant
<img src="URL alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Run this model on Kaggle Notebook
Model Name: bloom-3b-conversational
Model Architecture: bloom
Short Description: This model is a fine-tuned version of the BLOOM 3b language model, focusing on conversational interactions between a user and an AI assistant.
Intended Use: This model is intended for research purposes and exploration of conversational AI applications. It can be used for tasks like:
* Generating responses to user prompts in a chat assistant setting.
* Creating examples of chatbot interactions for further development.
* Studying the capabilities of language models for conversation.
Limitations:
* Fine-tuning Focus: The model's performance is optimized for the specific format and context of the fine-tuning data. It may not generalize well to significantly different conversation styles or topics.
* Potential Biases: The model may inherit biases from the training data. It's important to be aware of these potential biases and use the model responsibly.
* Limited Factual Accuracy: Language models are still under development and may generate responses that are not entirely factually accurate. It's important to verify information generated by the model with other sources.
* Primarily English: While the model can respond in other languages, the quality and accuracy of its responses may be lower compared to English. This is because the model was primarily fine-tuned on English data.
Specific Input Format:
The model was fine-tuned using a specific input format that goes like this:
Using this format when interacting with the model can improve its performance and generate more relevant responses.
Disclaimer: This model is for research and exploration purposes only. It should not be used in any applications that require high levels of accuracy or reliability.
| [
"## BLOOM 3b Fine-tuned for Chat Assistant\n\n<img src=\"URL alt=\"BigScience Logo\" width=\"800\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\nRun this model on Kaggle Notebook\n\nModel Name: bloom-3b-conversational\n\nModel Architecture: bloom\n\nShort Description: This model is a fine-tuned version of the BLOOM 3b language model, focusing on conversational interactions between an user and an AI assistant.\n\nIntended Use: This model is intended for research purposes and exploration of conversational AI applications. It can be used for tasks like:\n\n* Generating responses to user prompts in a chat assistant setting.\n* Creating examples of chatbot interactions for further development.\n* Studying the capabilities of language models for conversation.\n\nLimitations:\n\n* Fine-tuning Focus: The model's performance is optimized for the specific format and context of the fine-tuning data. It may not generalize well to significantly different conversation styles or topics.\n* Potential Biases: The model may inherit biases from the training data. It's important to be aware of these potential biases and use the model responsibly.\n* Limited Factual Accuracy: Language models are still under development and may generate responses that are not entirely factually accurate. It's important to verify information generated by the model with other sources.\n* Primarily English: While the model can respond in other languages, the quality and accuracy of its responses may be lower compared to English. This is because the model was primarily fine-tuned on English data.\n\nSpecific Input Format:\n\nThe model was fine-tuned using a specific input format that goes like this:\n\n\n\nUsing this format when interacting with the model can improve its performance and generate more relevant responses.\n\nDisclaimer: This model is for research and exploration purposes only. It should not be used in any applications that require high levels of accuracy or reliability."
] | [
"TAGS\n#transformers #safetensors #bloom #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"## BLOOM 3b Fine-tuned for Chat Assistant\n\n<img src=\"URL alt=\"BigScience Logo\" width=\"800\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\nRun this model on Kaggle Notebook\n\nModel Name: bloom-3b-conversational\n\nModel Architecture: bloom\n\nShort Description: This model is a fine-tuned version of the BLOOM 3b language model, focusing on conversational interactions between an user and an AI assistant.\n\nIntended Use: This model is intended for research purposes and exploration of conversational AI applications. It can be used for tasks like:\n\n* Generating responses to user prompts in a chat assistant setting.\n* Creating examples of chatbot interactions for further development.\n* Studying the capabilities of language models for conversation.\n\nLimitations:\n\n* Fine-tuning Focus: The model's performance is optimized for the specific format and context of the fine-tuning data. It may not generalize well to significantly different conversation styles or topics.\n* Potential Biases: The model may inherit biases from the training data. It's important to be aware of these potential biases and use the model responsibly.\n* Limited Factual Accuracy: Language models are still under development and may generate responses that are not entirely factually accurate. It's important to verify information generated by the model with other sources.\n* Primarily English: While the model can respond in other languages, the quality and accuracy of its responses may be lower compared to English. This is because the model was primarily fine-tuned on English data.\n\nSpecific Input Format:\n\nThe model was fine-tuned using a specific input format that goes like this:\n\n\n\nUsing this format when interacting with the model can improve its performance and generate more relevant responses.\n\nDisclaimer: This model is for research and exploration purposes only. It should not be used in any applications that require high levels of accuracy or reliability."
] |
null | null | GPT-SoVITS: https://github.com/RVC-Boss/GPT-SoVITS
Training data: https://huggingface.co/datasets/hello2mao/Chinese_Audio_Resource/tree/main/%E7%94%9C%E5%B0%8F%E5%96%B5 | {} | miugod/gpt_sovits_txm | null | [
"region:us"
] | null | 2024-05-01T14:28:03+00:00 | [] | [] | TAGS
#region-us
| GPT-SoVITS: URL
Training data: URL | [] | [
"TAGS\n#region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | Cognitus-Stuti/llama3-8b | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-01T14:28:18+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #unsloth #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #unsloth #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask | transformers |
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
30 April 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | BERTrand_bs64_lr6 |
| sequence_length | 400 |
| num_epoch | 12 |
| learning_rate | 5e-06 |
| batch_size | 64 |
| weight_decay | 0.0 |
| warm_up_prop | 0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 3147 |
## Training and Testing steps
Epoch | Train Loss | Test Loss
---|---|---
| 0.0 | 15.574399 | 15.096123 |
| 0.5 | 9.594637 | 8.148669 |
| 1.0 | 7.853338 | 8.074202 |
| 1.5 | 7.905947 | 7.939530 |
| 2.0 | 7.834033 | 7.833388 |
| 2.5 | 7.720610 | 7.871610 |
| 3.0 | 7.495963 | 7.976839 |
| 3.5 | 7.330389 | 7.752517 |
| 4.0 | 7.214343 | 7.848690 |
| 4.5 | 7.346055 | 7.724831 |
| 5.0 | 7.110836 | 7.715771 |
| 5.5 | 7.125741 | 7.595748 |
| 6.0 | 7.127250 | 7.659738 |
| 6.5 | 7.239036 | 7.671448 |
| 7.0 | 7.073343 | 7.705375 |
| 7.5 | 7.070813 | 7.589307 |
| 8.0 | 7.124647 | 7.582091 |
| 8.5 | 7.166616 | 7.539913 |
| 9.0 | 7.092505 | 7.611073 |
| 9.5 | 7.048057 | 7.625665 |
| 10.0 | 7.101367 | 7.564788 |
| 10.5 | 7.108332 | 7.602001 |
| 11.0 | 7.179604 | 7.554187 |
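As a minimal usage sketch (assuming the repository id `damgomz/BERTrand_bs64_lr6` shown in this record and an illustrative input sentence): the checkpoint is published for fill-mask, so it can be queried through the `fill-mask` pipeline. Because the weights are stored as an ALBERT pre-training checkpoint, unrelated heads may be skipped with a warning when the masked-LM head is loaded.

```python
from transformers import pipeline

# Load the published checkpoint for masked-token prediction; the repository id
# is taken from this record, the example sentence is illustrative only.
fill_mask = pipeline("fill-mask", model="damgomz/BERTrand_bs64_lr6")

# Use the tokenizer's own mask token so the sketch does not hard-code the
# exact mask string the ALBERT tokenizer expects.
sentence = f"The model was pre-trained with a masked language modelling {fill_mask.tokenizer.mask_token}."
for prediction in fill_mask(sentence):
    print(prediction["token_str"], round(prediction["score"], 4))
```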
| {"language": "en", "tags": ["fill-mask"]} | damgomz/BERTrand_bs64_lr6 | null | [
"transformers",
"safetensors",
"albert",
"pretraining",
"fill-mask",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:28:39+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #albert #pretraining #fill-mask #en #endpoints_compatible #region-us
| Environmental Impact (CODE CARBON DEFAULT)
------------------------------------------
Environmental Impact (for one core)
-----------------------------------
Note
----
30 April 2024
My Config
---------
Training and Testing steps
--------------------------
Epoch: 0.0, Train Loss: 15.574399, Test Loss: 15.096123
Epoch: 0.5, Train Loss: 9.594637, Test Loss: 8.148669
Epoch: 1.0, Train Loss: 7.853338, Test Loss: 8.074202
Epoch: 1.5, Train Loss: 7.905947, Test Loss: 7.939530
Epoch: 2.0, Train Loss: 7.834033, Test Loss: 7.833388
Epoch: 2.5, Train Loss: 7.720610, Test Loss: 7.871610
Epoch: 3.0, Train Loss: 7.495963, Test Loss: 7.976839
Epoch: 3.5, Train Loss: 7.330389, Test Loss: 7.752517
Epoch: 4.0, Train Loss: 7.214343, Test Loss: 7.848690
Epoch: 4.5, Train Loss: 7.346055, Test Loss: 7.724831
Epoch: 5.0, Train Loss: 7.110836, Test Loss: 7.715771
Epoch: 5.5, Train Loss: 7.125741, Test Loss: 7.595748
Epoch: 6.0, Train Loss: 7.127250, Test Loss: 7.659738
Epoch: 6.5, Train Loss: 7.239036, Test Loss: 7.671448
Epoch: 7.0, Train Loss: 7.073343, Test Loss: 7.705375
Epoch: 7.5, Train Loss: 7.070813, Test Loss: 7.589307
Epoch: 8.0, Train Loss: 7.124647, Test Loss: 7.582091
Epoch: 8.5, Train Loss: 7.166616, Test Loss: 7.539913
Epoch: 9.0, Train Loss: 7.092505, Test Loss: 7.611073
Epoch: 9.5, Train Loss: 7.048057, Test Loss: 7.625665
Epoch: 10.0, Train Loss: 7.101367, Test Loss: 7.564788
Epoch: 10.5, Train Loss: 7.108332, Test Loss: 7.602001
Epoch: 11.0, Train Loss: 7.179604, Test Loss: 7.554187
| [] | [
"TAGS\n#transformers #safetensors #albert #pretraining #fill-mask #en #endpoints_compatible #region-us \n"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model) as a base.
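As a hedged reproduction sketch (not stated in this card): merges declared this way are normally run with mergekit's `mergekit-yaml` command, pointed at the YAML shown in the Configuration section further down. The file name, output directory, and `--cuda` flag below are assumptions.

```python
import subprocess

# Save the YAML from the "Configuration" section below as versatile-7b.yaml,
# then execute the merge with the mergekit CLI. Paths and the --cuda flag are
# illustrative; check your local GPU setup before enabling it.
subprocess.run(
    ["mergekit-yaml", "versatile-7b.yaml", "./merged-versatile-7b", "--cuda"],
    check=True,
)
```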
### Models Merged
The following models were included in the merge:
* [MaziyarPanahi/Calme-7B-Instruct-v0.3](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MaziyarPanahi/Calme-7B-Instruct-v0.3
parameters:
density: 0.53
weight: 0.4
- model: MTSAIR/multi_verse_model
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: MTSAIR/multi_verse_model
parameters:
int8_mask: true
dtype: bfloat16
``` | {"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["MTSAIR/multi_verse_model", "MaziyarPanahi/Calme-7B-Instruct-v0.3"]} | Syed-Hasan-8503/Versatile-7B | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:MTSAIR/multi_verse_model",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:28:55+00:00 | [
"2311.03099",
"2306.01708"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-MTSAIR/multi_verse_model #base_model-MaziyarPanahi/Calme-7B-Instruct-v0.3 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the DARE TIES merge method using MTSAIR/multi_verse_model as a base.
### Models Merged
The following models were included in the merge:
* MaziyarPanahi/Calme-7B-Instruct-v0.3
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using MTSAIR/multi_verse_model as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* MaziyarPanahi/Calme-7B-Instruct-v0.3",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #arxiv-2311.03099 #arxiv-2306.01708 #base_model-MTSAIR/multi_verse_model #base_model-MaziyarPanahi/Calme-7B-Instruct-v0.3 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the DARE TIES merge method using MTSAIR/multi_verse_model as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* MaziyarPanahi/Calme-7B-Instruct-v0.3",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
fill-mask | transformers |
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
30 April 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | BERTrand_bs64_lr5 |
| sequence_length | 400 |
| num_epoch | 12 |
| learning_rate | 5e-05 |
| batch_size | 64 |
| weight_decay | 0.0 |
| warm_up_prop | 0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 3148 |
## Training and Testing steps
Epoch | Train Loss | Test Loss
---|---|---
| 0.0 | 15.402081 | 13.817045 |
| 0.5 | 8.054414 | 7.829068 |
| 1.0 | 4.816239 | 3.114260 |
| 1.5 | 2.206430 | 2.955595 |
| 2.0 | 2.189819 | 2.872115 |
| 2.5 | 2.418134 | 2.865437 |
| 3.0 | 2.349051 | 2.810524 |
| 3.5 | 2.102283 | 2.820134 |
| 4.0 | 1.907061 | 2.957294 |
| 4.5 | 2.326205 | 2.785392 |
| 5.0 | 2.257292 | 2.737638 |
| 5.5 | 2.127350 | 2.733068 |
| 6.0 | 1.883285 | 2.774372 |
| 6.5 | 2.100682 | 2.667502 |
| 7.0 | 2.194973 | 2.628296 |
| 7.5 | 2.163919 | 2.643665 |
| 8.0 | 1.850441 | 2.637510 |
| 8.5 | 1.968181 | 2.632833 |
| 9.0 | 2.121989 | 2.625116 |
| 9.5 | 2.136497 | 2.646418 |
| 10.0 | 1.891819 | 2.655790 |
| 10.5 | 1.822331 | 2.596789 |
| 11.0 | 2.116980 | 2.584595 |
| {"language": "en", "tags": ["fill-mask"]} | damgomz/BERTrand_bs64_lr5 | null | [
"transformers",
"safetensors",
"albert",
"pretraining",
"fill-mask",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:29:34+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #albert #pretraining #fill-mask #en #endpoints_compatible #region-us
| Environmental Impact (CODE CARBON DEFAULT)
------------------------------------------
Environmental Impact (for one core)
-----------------------------------
Note
----
30 April 2024
My Config
---------
Training and Testing steps
--------------------------
Epoch: 0.0, Train Loss: 15.402081, Test Loss: 13.817045
Epoch: 0.5, Train Loss: 8.054414, Test Loss: 7.829068
Epoch: 1.0, Train Loss: 4.816239, Test Loss: 3.114260
Epoch: 1.5, Train Loss: 2.206430, Test Loss: 2.955595
Epoch: 2.0, Train Loss: 2.189819, Test Loss: 2.872115
Epoch: 2.5, Train Loss: 2.418134, Test Loss: 2.865437
Epoch: 3.0, Train Loss: 2.349051, Test Loss: 2.810524
Epoch: 3.5, Train Loss: 2.102283, Test Loss: 2.820134
Epoch: 4.0, Train Loss: 1.907061, Test Loss: 2.957294
Epoch: 4.5, Train Loss: 2.326205, Test Loss: 2.785392
Epoch: 5.0, Train Loss: 2.257292, Test Loss: 2.737638
Epoch: 5.5, Train Loss: 2.127350, Test Loss: 2.733068
Epoch: 6.0, Train Loss: 1.883285, Test Loss: 2.774372
Epoch: 6.5, Train Loss: 2.100682, Test Loss: 2.667502
Epoch: 7.0, Train Loss: 2.194973, Test Loss: 2.628296
Epoch: 7.5, Train Loss: 2.163919, Test Loss: 2.643665
Epoch: 8.0, Train Loss: 1.850441, Test Loss: 2.637510
Epoch: 8.5, Train Loss: 1.968181, Test Loss: 2.632833
Epoch: 9.0, Train Loss: 2.121989, Test Loss: 2.625116
Epoch: 9.5, Train Loss: 2.136497, Test Loss: 2.646418
Epoch: 10.0, Train Loss: 1.891819, Test Loss: 2.655790
Epoch: 10.5, Train Loss: 1.822331, Test Loss: 2.596789
Epoch: 11.0, Train Loss: 2.116980, Test Loss: 2.584595
| [] | [
"TAGS\n#transformers #safetensors #albert #pretraining #fill-mask #en #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 190422.01247096065 |
| Emissions (Co2eq in kg) | 0.1993119180945443 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 37.5 |
| CPU energy (kWh) | 2.2480348206905836 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 1.9835508801341004 |
| Consumed energy (kWh) | 4.231585700824702 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 4 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.3665623740065992 |
| Emissions (Co2eq in kg) | 0.07458195488445958 |
## Note
30 April 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | BERTrand_bs16_lr5 |
| sequence_length | 400 |
| num_epoch | 12 |
| learning_rate | 5e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 12590 |
## Training and Testing steps
Epoch | Train Loss | Test Loss
---|---|---
| 0.0 | 15.505644 | 13.835570 |
| 0.5 | 2.703247 | 3.216191 |
| 1.0 | 2.525614 | 3.149906 |
| 1.5 | 2.203586 | 3.040256 |
| 2.0 | 2.160747 | 2.952961 |
| 2.5 | 2.370840 | 2.949682 |
| 3.0 | 2.350076 | 2.925128 |
| 3.5 | 2.110838 | 2.981181 |
| 4.0 | 1.903310 | 2.836523 |
| 4.5 | 2.270815 | 2.814076 |
| 5.0 | 2.256549 | 2.848042 |
| 5.5 | 2.214569 | 2.812752 |
| 6.0 | 1.896058 | 2.755932 |
| 6.5 | 2.082173 | 2.737672 |
| 7.0 | 2.156056 | 2.711664 |
| 7.5 | 2.133011 | 2.690121 |
| 8.0 | 1.826039 | 2.718429 |
| 8.5 | 1.919896 | 2.646301 |
| 9.0 | 2.058763 | 2.616522 |
| 9.5 | 2.088796 | 2.676844 |
| 10.0 | 1.848681 | 2.622713 |
| 10.5 | 1.776635 | 2.581153 |
| 11.0 | 2.059736 | 2.579319 |
| 11.5 | 2.055293 | 2.591116 |
| 12.0 | 1.912107 | 2.555768 |
| {"language": "en", "tags": ["fill-mask"], "kwargs": {"timestamp": "2024-05-03T20:37:09", "project_name": "BERTrand_bs16_lr5_emissions_tracker", "run_id": "49c906ec-058d-4552-a3a9-71ef3ba22844", "duration": 190422.01247096065, "emissions": 0.1993119180945443, "emissions_rate": 1.046685283430346e-06, "cpu_power": 42.5, "gpu_power": 0.0, "ram_power": 37.5, "cpu_energy": 2.2480348206905836, "gpu_energy": 0, "ram_energy": 1.9835508801341004, "energy_consumed": 4.231585700824702, "country_name": "Switzerland", "country_iso_code": "CHE", "region": NaN, "cloud_provider": NaN, "cloud_region": NaN, "os": "Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34", "python_version": "3.10.4", "codecarbon_version": "2.3.4", "cpu_count": 4, "cpu_model": "Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz", "gpu_count": NaN, "gpu_model": NaN, "longitude": NaN, "latitude": NaN, "ram_total_size": 100, "tracking_mode": "machine", "on_cloud": "N", "pue": 1.0}} | damgomz/BERTrand_bs16_lr5 | null | [
"transformers",
"safetensors",
"albert",
"pretraining",
"fill-mask",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:29:40+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #albert #pretraining #fill-mask #en #endpoints_compatible #region-us
| Environmental Impact (CODE CARBON DEFAULT)
------------------------------------------
Environmental Impact (for one core)
-----------------------------------
Note
----
30 April 2024
My Config
---------
Training and Testing steps
--------------------------
Epoch: 0.0, Train Loss: 15.505644, Test Loss: 13.835570
Epoch: 0.5, Train Loss: 2.703247, Test Loss: 3.216191
Epoch: 1.0, Train Loss: 2.525614, Test Loss: 3.149906
Epoch: 1.5, Train Loss: 2.203586, Test Loss: 3.040256
Epoch: 2.0, Train Loss: 2.160747, Test Loss: 2.952961
Epoch: 2.5, Train Loss: 2.370840, Test Loss: 2.949682
Epoch: 3.0, Train Loss: 2.350076, Test Loss: 2.925128
Epoch: 3.5, Train Loss: 2.110838, Test Loss: 2.981181
Epoch: 4.0, Train Loss: 1.903310, Test Loss: 2.836523
Epoch: 4.5, Train Loss: 2.270815, Test Loss: 2.814076
Epoch: 5.0, Train Loss: 2.256549, Test Loss: 2.848042
Epoch: 5.5, Train Loss: 2.214569, Test Loss: 2.812752
Epoch: 6.0, Train Loss: 1.896058, Test Loss: 2.755932
Epoch: 6.5, Train Loss: 2.082173, Test Loss: 2.737672
Epoch: 7.0, Train Loss: 2.156056, Test Loss: 2.711664
Epoch: 7.5, Train Loss: 2.133011, Test Loss: 2.690121
Epoch: 8.0, Train Loss: 1.826039, Test Loss: 2.718429
Epoch: 8.5, Train Loss: 1.919896, Test Loss: 2.646301
Epoch: 9.0, Train Loss: 2.058763, Test Loss: 2.616522
Epoch: 9.5, Train Loss: 2.088796, Test Loss: 2.676844
Epoch: 10.0, Train Loss: 1.848681, Test Loss: 2.622713
Epoch: 10.5, Train Loss: 1.776635, Test Loss: 2.581153
Epoch: 11.0, Train Loss: 2.059736, Test Loss: 2.579319
Epoch: 11.5, Train Loss: 2.055293, Test Loss: 2.591116
Epoch: 12.0, Train Loss: 1.912107, Test Loss: 2.555768
| [] | [
"TAGS\n#transformers #safetensors #albert #pretraining #fill-mask #en #endpoints_compatible #region-us \n"
] |
fill-mask | transformers |
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
30 April 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | BERTrand_bs16_lr6 |
| sequence_length | 400 |
| num_epoch | 12 |
| learning_rate | 5e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 12597 |
## Training and Testing steps
Epoch | Train Loss | Test Loss
---|---|---
| 0.0 | 15.499363 | 15.126202 |
| 0.5 | 8.243008 | 8.091313 |
| 1.0 | 7.354550 | 7.973097 |
| 1.5 | 7.244512 | 7.687902 |
| 2.0 | 7.199471 | 7.686438 |
| 2.5 | 7.161911 | 7.657964 |
| 3.0 | 7.049606 | 7.765894 |
| 3.5 | 7.091233 | 7.786471 |
| 4.0 | 7.060327 | 7.814007 |
| 4.5 | 7.164975 | 7.602441 |
| 5.0 | 7.008690 | 7.586652 |
| 5.5 | 7.035732 | 7.525334 |
| 6.0 | 7.045719 | 7.715493 |
| 6.5 | 7.108394 | 7.486681 |
| 7.0 | 6.992529 | 7.641908 |
| 7.5 | 6.993111 | 7.575982 |
| 8.0 | 7.051304 | 7.670884 |
| 8.5 | 7.075128 | 7.527084 |
| 9.0 | 7.010871 | 7.556140 |
| 9.5 | 6.978617 | 7.623441 |
| 10.0 | 7.033166 | 7.562979 |
| 10.5 | 7.026733 | 7.549661 |
| 11.0 | 7.080350 | 7.510597 |
| 11.5 | 6.955319 | 7.589880 |
| {"language": "en", "tags": ["fill-mask"]} | damgomz/BERTrand_bs16_lr6 | null | [
"transformers",
"safetensors",
"albert",
"pretraining",
"fill-mask",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:29:56+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #albert #pretraining #fill-mask #en #endpoints_compatible #region-us
| Environmental Impact (CODE CARBON DEFAULT)
------------------------------------------
Environmental Impact (for one core)
-----------------------------------
Note
----
30 April 2024
My Config
---------
Training and Testing steps
--------------------------
Epoch: 0.0, Train Loss: 15.499363, Test Loss: 15.126202
Epoch: 0.5, Train Loss: 8.243008, Test Loss: 8.091313
Epoch: 1.0, Train Loss: 7.354550, Test Loss: 7.973097
Epoch: 1.5, Train Loss: 7.244512, Test Loss: 7.687902
Epoch: 2.0, Train Loss: 7.199471, Test Loss: 7.686438
Epoch: 2.5, Train Loss: 7.161911, Test Loss: 7.657964
Epoch: 3.0, Train Loss: 7.049606, Test Loss: 7.765894
Epoch: 3.5, Train Loss: 7.091233, Test Loss: 7.786471
Epoch: 4.0, Train Loss: 7.060327, Test Loss: 7.814007
Epoch: 4.5, Train Loss: 7.164975, Test Loss: 7.602441
Epoch: 5.0, Train Loss: 7.008690, Test Loss: 7.586652
Epoch: 5.5, Train Loss: 7.035732, Test Loss: 7.525334
Epoch: 6.0, Train Loss: 7.045719, Test Loss: 7.715493
Epoch: 6.5, Train Loss: 7.108394, Test Loss: 7.486681
Epoch: 7.0, Train Loss: 6.992529, Test Loss: 7.641908
Epoch: 7.5, Train Loss: 6.993111, Test Loss: 7.575982
Epoch: 8.0, Train Loss: 7.051304, Test Loss: 7.670884
Epoch: 8.5, Train Loss: 7.075128, Test Loss: 7.527084
Epoch: 9.0, Train Loss: 7.010871, Test Loss: 7.556140
Epoch: 9.5, Train Loss: 6.978617, Test Loss: 7.623441
Epoch: 10.0, Train Loss: 7.033166, Test Loss: 7.562979
Epoch: 10.5, Train Loss: 7.026733, Test Loss: 7.549661
Epoch: 11.0, Train Loss: 7.080350, Test Loss: 7.510597
Epoch: 11.5, Train Loss: 6.955319, Test Loss: 7.589880
| [] | [
"TAGS\n#transformers #safetensors #albert #pretraining #fill-mask #en #endpoints_compatible #region-us \n"
] |
null | transformers |
# Uploaded model
- **Developed by:** felixml
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
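As a local-inference sketch (the GGUF file name and the prompt below are assumptions, not taken from this card): a q8_0 GGUF export like this one can be run with `llama-cpp-python`.

```python
from llama_cpp import Llama

# Point model_path at the downloaded q8_0 GGUF file from this repository;
# the exact file name is an assumption here.
llm = Llama(model_path="llama-3-8b-synthetic_text_to_sql-q8_0.gguf", n_ctx=4096)

# The fine-tune targets text-to-SQL, but the card does not document a prompt
# template, so this prompt is purely illustrative.
result = llm(
    "Translate to SQL: list all customers who placed an order in 2023.",
    max_tokens=128,
    temperature=0.2,
)
print(result["choices"][0]["text"])
```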
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | felixml/Llama-3-8B-synthetic_text_to_sql-60-steps-q8_0-gguf | null | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:31:09+00:00 | [] | [
"en"
] | TAGS
#transformers #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: felixml
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: felixml\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: felixml\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama3_on_scigen_v2
This model is a fine-tuned version of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 30
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "unsloth/llama-3-8b-bnb-4bit", "model-index": [{"name": "Llama3_on_scigen_v2", "results": []}]} | moetezsa/Llama3_on_scigen_v2 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:llama2",
"region:us"
] | null | 2024-05-01T14:31:16+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-unsloth/llama-3-8b-bnb-4bit #license-llama2 #region-us
|
# Llama3_on_scigen_v2
This model is a fine-tuned version of unsloth/llama-3-8b-bnb-4bit on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 30
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# Llama3_on_scigen_v2\n\nThis model is a fine-tuned version of unsloth/llama-3-8b-bnb-4bit on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 30",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-unsloth/llama-3-8b-bnb-4bit #license-llama2 #region-us \n",
"# Llama3_on_scigen_v2\n\nThis model is a fine-tuned version of unsloth/llama-3-8b-bnb-4bit on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 30",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_asr_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 400
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
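As a usage sketch (the audio file name is a placeholder; the repository id comes from this record): the fine-tuned checkpoint can be queried through the automatic-speech-recognition pipeline.

```python
from transformers import pipeline

# Load the fine-tuned wav2vec2 CTC model for speech recognition.
asr = pipeline("automatic-speech-recognition", model="rahul9699/my_awesome_asr_mind_model")

# wav2vec2-base expects 16 kHz mono audio; "sample.wav" is a placeholder file.
print(asr("sample.wav")["text"])
```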
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["minds14"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "my_awesome_asr_mind_model", "results": []}]} | rahul9699/my_awesome_asr_mind_model | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:31:16+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-minds14 #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us
|
# my_awesome_asr_mind_model
This model is a fine-tuned version of facebook/wav2vec2-base on the minds14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 400
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# my_awesome_asr_mind_model\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the minds14 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 250\n- training_steps: 400",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2+cpu\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-minds14 #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us \n",
"# my_awesome_asr_mind_model\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the minds14 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-06\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 250\n- training_steps: 400",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2+cpu\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | transformers |
# Uploaded model
- **Developed by:** felixml
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
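As a loading sketch (the prompt is illustrative; the base and adapter ids come from this card): the LoRA adapter can be attached to the 4-bit base model with PEFT.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-bnb-4bit"
adapter_id = "felixml/Llama-3-8B-synthetic_text_to_sql-60-steps-lora"

# The base checkpoint is already quantized to 4-bit, so bitsandbytes must be
# installed; device_map="auto" places the weights on the available device.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter published in this repository on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

# The prompt format is illustrative; the card does not document the exact
# template used during fine-tuning.
prompt = "Translate to SQL: how many orders were shipped last month?"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```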
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | felixml/Llama-3-8B-synthetic_text_to_sql-60-steps-lora | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:31:28+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: felixml
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: felixml\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: felixml\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5799
- Precision: 0.8808
- Recall: 0.9106
- F1: 0.8955
- Accuracy: 0.8507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.3333 | 100 | 0.6686 | 0.7452 | 0.8251 | 0.7831 | 0.7535 |
| No log | 2.6667 | 200 | 0.4724 | 0.8064 | 0.8713 | 0.8376 | 0.8389 |
| No log | 4.0 | 300 | 0.4922 | 0.8612 | 0.8942 | 0.8774 | 0.8481 |
| No log | 5.3333 | 400 | 0.4632 | 0.8587 | 0.8997 | 0.8787 | 0.8521 |
| 0.544 | 6.6667 | 500 | 0.4850 | 0.8632 | 0.9031 | 0.8827 | 0.8474 |
| 0.544 | 8.0 | 600 | 0.5024 | 0.8744 | 0.8992 | 0.8866 | 0.8451 |
| 0.544 | 9.3333 | 700 | 0.5394 | 0.8768 | 0.9155 | 0.8957 | 0.8565 |
| 0.544 | 10.6667 | 800 | 0.5647 | 0.8800 | 0.9146 | 0.8970 | 0.8550 |
| 0.544 | 12.0 | 900 | 0.5798 | 0.8847 | 0.9106 | 0.8974 | 0.8545 |
| 0.1288 | 13.3333 | 1000 | 0.5799 | 0.8808 | 0.9106 | 0.8955 | 0.8507 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.19.1
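As an inference sketch (the scanned-form image is a placeholder; running it also requires Tesseract/pytesseract for the processor's built-in OCR): the fine-tuned LayoutLMv3 checkpoint can be applied to a document image as follows.

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# The base processor runs OCR when apply_ocr=True, so words and bounding
# boxes are extracted automatically (pytesseract must be installed).
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("cor-c/test")

# "form.png" is a placeholder scan of a FUNSD-style form.
image = Image.open("form.png").convert("RGB")
encoding = processor(image, return_tensors="pt")

outputs = model(**encoding)
predicted_ids = outputs.logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[i] for i in predicted_ids]
print(labels[:20])
```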
| {"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "datasets": ["funsd-layoutlmv3"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/layoutlmv3-base", "model-index": [{"name": "test", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "funsd-layoutlmv3", "type": "funsd-layoutlmv3", "config": "funsd", "split": "test", "args": "funsd"}, "metrics": [{"type": "precision", "value": 0.8808265257087938, "name": "Precision"}, {"type": "recall", "value": 0.910581222056632, "name": "Recall"}, {"type": "f1", "value": 0.895456765999023, "name": "F1"}, {"type": "accuracy", "value": 0.8507072387970998, "name": "Accuracy"}]}]}]} | cor-c/test | null | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"base_model:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:31:32+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #layoutlmv3 #token-classification #generated_from_trainer #dataset-funsd-layoutlmv3 #base_model-microsoft/layoutlmv3-base #license-cc-by-nc-sa-4.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| test
====
This model is a fine-tuned version of microsoft/layoutlmv3-base on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5799
* Precision: 0.8808
* Recall: 0.9106
* F1: 0.8955
* Accuracy: 0.8507
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 1000
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 2.1.1+cu118
* Datasets 2.15.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.1.1+cu118\n* Datasets 2.15.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #layoutlmv3 #token-classification #generated_from_trainer #dataset-funsd-layoutlmv3 #base_model-microsoft/layoutlmv3-base #license-cc-by-nc-sa-4.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.1.1+cu118\n* Datasets 2.15.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` | {"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | ammar4567/FYP-finetune | null | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:32:25+00:00 | [] | [] | TAGS
#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us
|
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit AutoTrain.
# Usage
| [
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] | [
"TAGS\n#transformers #safetensors #autotrain #text-generation-inference #text-generation #peft #conversational #license-other #endpoints_compatible #region-us \n",
"# Model Trained Using AutoTrain\n\nThis model was trained using AutoTrain. For more information, please visit AutoTrain.",
"# Usage"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ppo_zephyr310
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 32
- total_train_batch_size: 224
- total_eval_batch_size: 56
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
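As a generation sketch (assuming the tokenizer and chat template were pushed with this repository; otherwise they can be loaded from the HuggingFaceH4/mistral-7b-sft-beta base):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cleanrl/ppo_zephyr310"

# Tokenizer/chat template are assumed to be inherited from the SFT base model.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarise what PPO fine-tuning changes in a chat model."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```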
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "ppo_zephyr310", "results": []}]} | cleanrl/ppo_zephyr310 | null | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:33:11+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #mistral #text-generation #generated_from_trainer #conversational #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ppo_zephyr310
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 32
- total_train_batch_size: 224
- total_eval_batch_size: 56
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"# ppo_zephyr310\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 7\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 224\n- total_eval_batch_size: 56\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #generated_from_trainer #conversational #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ppo_zephyr310\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-06\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 7\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 224\n- total_eval_batch_size: 56\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pretrained-bert
This model is a fine-tuned version of an unspecified base checkpoint on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.6929
- Validation Loss: 8.7752
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.6929 | 8.7752 | 0 |
### Framework versions
- Transformers 4.41.0.dev0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
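As a loading sketch (it assumes a tokenizer was pushed alongside the TF weights, which this card does not confirm): the Keras-trained checkpoint can be loaded back for masked-LM inference, though with a validation loss of about 8.8 the predictions are expected to be close to random.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

model_id = "Diluzx/pretrained-bert"

# Assumption: tokenizer files exist in the repository; if not, the matching
# base BERT tokenizer would have to be loaded instead.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForMaskedLM.from_pretrained(model_id)

inputs = tokenizer(f"Paris is the {tokenizer.mask_token} of France.", return_tensors="tf")
logits = model(**inputs).logits

# Locate the mask position and print the top predicted token.
mask_position = int(tf.argmax(tf.cast(inputs["input_ids"][0] == tokenizer.mask_token_id, tf.int32)))
top_token_id = int(tf.argmax(logits[0, mask_position]))
print(tokenizer.decode([top_token_id]))
```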
| {"tags": ["generated_from_keras_callback"], "model-index": [{"name": "pretrained-bert", "results": []}]} | Diluzx/pretrained-bert | null | [
"transformers",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:34:37+00:00 | [] | [] | TAGS
#transformers #tf #bert #pretraining #generated_from_keras_callback #endpoints_compatible #region-us
| pretrained-bert
===============
This model is a fine-tuned version of an unspecified base checkpoint on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 8.6929
* Validation Loss: 8.7752
* Epoch: 0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': 0.001, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* TensorFlow 2.15.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': 0.001, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #bert #pretraining #generated_from_keras_callback #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': 0.001, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Reihaneh/wav2vec2_fy_nl_common_voice_17 | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:35:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Poojithpoosa/myemotion_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5775
- Validation Loss: 1.5589
- Train Accuracy: 0.3475
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.002, 'decay_steps': 5000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
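For reference, the logged optimizer corresponds roughly to the following Keras setup (a minimal sketch reconstructed from the values above; the commented `compile` call and its loss are illustrative assumptions):

```python
import tensorflow as tf

# Linear decay (power=1.0) from 2e-3 to 0 over 5000 steps, as logged above
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=0.002,
    decay_steps=5000,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)

# model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```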
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.5987 | 1.5625 | 0.3475 | 0 |
| 1.5827 | 1.5605 | 0.3475 | 1 |
| 1.5775 | 1.5589 | 0.3475 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.10.1
- Datasets 2.19.0
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Poojithpoosa/myemotion_model", "results": []}]} | Poojithpoosa/myemotion_model | null | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:37:46+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Poojithpoosa/myemotion\_model
=============================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 1.5775
* Validation Loss: 1.5589
* Train Accuracy: 0.3475
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 0.002, 'decay\_steps': 5000, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.24.0
* TensorFlow 2.10.1
* Datasets 2.19.0
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 0.002, 'decay\\_steps': 5000, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.24.0\n* TensorFlow 2.10.1\n* Datasets 2.19.0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 0.002, 'decay\\_steps': 5000, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.24.0\n* TensorFlow 2.10.1\n* Datasets 2.19.0\n* Tokenizers 0.11.0"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Waktaverse-Llama-3-KO-8B-Instruct - bnb 4bits
- Model creator: https://huggingface.co/PathFinderKR/
- Original model: https://huggingface.co/PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct/
Original model description:
---
language:
- ko
- en
license: llama3
library_name: transformers
datasets:
- MarkrAI/KoCommercial-Dataset
---
# Waktaverse-Llama-3-KO-8B-Instruct Model Card
## Model Details

Waktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by Waktaverse AI team.
This large language model is a specialized version of the Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks.
It is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.
- **Developed by:** Waktaverse AI
- **Model type:** Large Language Model
- **Language(s) (NLP):** Korean, English
- **License:** [Llama3](https://llama.meta.com/llama3/license)
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## Model Sources
- **Repository:** [GitHub](https://github.com/PathFinderKR/Waktaverse-LLM/tree/main)
- **Paper :** [More Information Needed]
## Uses
### Direct Use
The model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.
### Out-of-Scope Use
This model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making.
Moreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.
## Bias, Risks, and Limitations
While Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases.
There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.
## How to Get Started with the Model
You can run conversational inference using the Transformers Auto classes.
We highly recommend that you add a Korean system prompt for better output.
Adjust the hyperparameters as needed.
### Example Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = (
"cuda:0" if torch.cuda.is_available() else # Nvidia GPU
"mps" if torch.backends.mps.is_available() else # Apple Silicon GPU
"cpu"
)
model_id = "PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
    device_map=device,
)
################################################################################
# Generation parameters
################################################################################
num_return_sequences=1
max_new_tokens=1024
temperature=0.9
top_k=40
top_p=0.9
repetition_penalty=1.1
def generate_response(system, user):
messages = [
{"role": "system", "content": system},
{"role": "user", "content": user}
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=False
)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
).to(device)
outputs = model.generate(
input_ids=input_ids,
pad_token_id=tokenizer.eos_token_id,
num_return_sequences=num_return_sequences,
max_new_tokens=max_new_tokens,
do_sample=True,
temperature=temperature,
top_k=top_k,
top_p=top_p,
repetition_penalty=repetition_penalty
)
return tokenizer.decode(outputs[0], skip_special_tokens=False)
system_prompt = "다음 지시사항에 대한 응답을 작성해주세요."
user_prompt = "피보나치 수열에 대해 설명해주세요."
response = generate_response(system_prompt, user_prompt)
print(response)
```
### Example Output
```python
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
다음 지시사항에 대한 응답을 작성해주세요.<|eot_id|><|start_header_id|>user<|end_header_id|>
피보나치 수열에 대해 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
피보나치 수열은 수학에서 가장 유명한 수열 중 하나로, 0과 1로 시작하는 숫자들의 모임입니다. 각 숫자는 이전 두 개의 숫자의 합으로 정의되며, 이렇게 계속 반복됩니다. 피보나치 수열은 무한히 커지는데, 첫 번째와 두 번째 항이 모두 0일 수도 있지만 일반적으로는 첫 번째 항이 1이고 두 번째 항이 1입니다.
예를 들어, 0 + 1 = 1, 1 + 1 = 2, 2 + 1 = 3, 3 + 2 = 5, 5 + 3 = 8, 8 + 5 = 13, 13 + 8 = 21, 21 + 13 = 34 등이 있습니다. 이 숫자들을 피보나치 수열이라고 합니다.
피보나치 수열은 다른 수열들과 함께 사용될 때 도움이 됩니다. 예를 들어, 금융 시장에서는 금리 수익률을 나타내기 위해 이 수열이 사용됩니다. 또한 컴퓨터 과학과 컴퓨터 과학에서도 종종 찾을 수 있습니다. 피보나치 수열은 매우 복잡하며 많은 숫자가 나오므로 일반적인 수열처럼 쉽게 구할 수 없습니다. 이 때문에 피보나치 수열은 대수적 함수와 관련이 있으며 수학자들은 이를 연구하고 계산하기 위해 다양한 알고리즘을 개발했습니다.
참고 자료: https://en.wikipedia.org/wiki/Fibonacci_sequence#Properties.<|eot_id|>
```
## Training Details
### Training Data
The model is trained on the [MarkrAI/KoCommercial-Dataset](https://huggingface.co/datasets/MarkrAI/KoCommercial-Dataset), which consists of various commercial texts in Korean.
### Training Procedure
Model training used LoRA for computational efficiency: 0.02 billion parameters (0.26% of the total parameters) were trained.
#### Training Hyperparameters
```python
################################################################################
# bitsandbytes parameters
################################################################################
load_in_4bit=True
bnb_4bit_compute_dtype=torch_dtype
bnb_4bit_quant_type="nf4"
bnb_4bit_use_double_quant=False
################################################################################
# LoRA parameters
################################################################################
task_type="CAUSAL_LM"
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
r=8
lora_alpha=16
lora_dropout=0.05
bias="none"
################################################################################
# TrainingArguments parameters
################################################################################
num_train_epochs=1
per_device_train_batch_size=1
per_device_eval_batch_size=2
gradient_accumulation_steps=4
gradient_checkpointing=True
learning_rate=2e-5
lr_scheduler_type="cosine"
warmup_ratio=0.1
weight_decay=0.1
################################################################################
# SFT parameters
################################################################################
max_seq_length=1024
packing=True
```
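For illustration, these values map onto a common QLoRA-style setup roughly as follows. This is a minimal sketch, not the exact training script; `torch_dtype` is assumed to be `torch.bfloat16`, and `output_dir` is illustrative.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

torch_dtype = torch.bfloat16  # assumption; the card does not state the compute dtype

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch_dtype,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
)

peft_config = LoraConfig(
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
)

training_args = TrainingArguments(
    output_dir="waktaverse-llama-3-ko",  # illustrative
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    weight_decay=0.1,
)

# These objects would then be handed to an SFT trainer (e.g. trl's SFTTrainer) together with
# the KoCommercial dataset, max_seq_length=1024 and packing=True from the SFT parameters above.
```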
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Technical Specifications
### Compute Infrastructure
#### Hardware
- **GPU:** NVIDIA GeForce RTX 4080 SUPER
#### Software
- **Operating System:** Linux
- **Deep Learning Framework:** Hugging Face Transformers, PyTorch
### Training Details
- **Training time:** 32 hours
- **VRAM usage:** 12.8 GB
- **GPU power usage:** 300 W
## Citation
**Waktaverse-Llama-3**
```
TBD
```
**Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## Model Card Authors
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-4bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-01T14:38:20+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
Waktaverse-Llama-3-KO-8B-Instruct - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
---
language:
- ko
- en
license: llama3
library_name: transformers
datasets:
- MarkrAI/KoCommercial-Dataset
---
# Waktaverse-Llama-3-KO-8B-Instruct Model Card
## Model Details
!image/webp
Waktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by Waktaverse AI team.
This large language model is a specialized version of the Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks.
It is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.
- Developed by: Waktaverse AI
- Model type: Large Language Model
- Language(s) (NLP): Korean, English
- License: Llama3
- Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct
## Model Sources
- Repository: GitHub
- Paper :
## Uses
### Direct Use
The model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.
### Out-of-Scope Use
This model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making.
Moreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.
## Bias, Risks, and Limitations
While Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases.
There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.
## How to Get Started with the Model
You can run conversational inference using the Transformers Auto classes.
We highly recommend that you add Korean system prompt for better output.
Adjust the hyperparameters as you need.
### Example Usage
### Example Output
## Training Details
### Training Data
The model is trained on the MarkrAI/KoCommercial-Dataset, which consists of various commercial texts in Korean.
### Training Procedure
The model training used LoRA for computational efficiency. 0.02 billion parameters(0.26% of total parameters) were trained.
#### Training Hyperparameters
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Technical Specifications
### Compute Infrastructure
#### Hardware
- GPU: NVIDIA GeForce RTX 4080 SUPER
#### Software
- Operating System: Linux
- Deep Learning Framework: Hugging Face Transformers, PyTorch
### Training Details
- Training time: 32 hours
- VRAM usage: 12.8 GB
- GPU power usage: 300 W
Waktaverse-Llama-3
Llama-3
## Model Card Authors
## Model Card Contact
| [
"# Waktaverse-Llama-3-KO-8B-Instruct Model Card",
"## Model Details\n\n!image/webp\nWaktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by Waktaverse AI team.\nThis large language model is a specialized version of the Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks. \nIt is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.\n\n- Developed by: Waktaverse AI\n- Model type: Large Language Model\n- Language(s) (NLP): Korean, English\n- License: Llama3\n- Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct",
"## Model Sources\n\n- Repository: GitHub\n- Paper :",
"## Uses",
"### Direct Use\n\nThe model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.",
"### Out-of-Scope Use\n\nThis model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making. \nMoreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.",
"## Bias, Risks, and Limitations\n\nWhile Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases. \nThere is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.",
"## How to Get Started with the Model\n\nYou can run conversational inference using the Transformers Auto classes.\nWe highly recommend that you add Korean system prompt for better output.\nAdjust the hyperparameters as you need.",
"### Example Usage",
"### Example Output",
"## Training Details",
"### Training Data\n\nThe model is trained on the MarkrAI/KoCommercial-Dataset, which consists of various commercial texts in Korean.",
"### Training Procedure\n\nThe model training used LoRA for computational efficiency. 0.02 billion parameters(0.26% of total parameters) were trained.",
"#### Training Hyperparameters",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Technical Specifications",
"### Compute Infrastructure",
"#### Hardware\n\n- GPU: NVIDIA GeForce RTX 4080 SUPER",
"#### Software\n\n- Operating System: Linux\n- Deep Learning Framework: Hugging Face Transformers, PyTorch",
"### Training Details\n\n- Training time: 32 hours\n- VRAM usage: 12.8 GB\n- GPU power usage: 300 W\n\n\n\nWaktaverse-Llama-3\n\n\n\nLlama-3",
"## Model Card Authors",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Waktaverse-Llama-3-KO-8B-Instruct Model Card",
"## Model Details\n\n!image/webp\nWaktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by Waktaverse AI team.\nThis large language model is a specialized version of the Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks. \nIt is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.\n\n- Developed by: Waktaverse AI\n- Model type: Large Language Model\n- Language(s) (NLP): Korean, English\n- License: Llama3\n- Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct",
"## Model Sources\n\n- Repository: GitHub\n- Paper :",
"## Uses",
"### Direct Use\n\nThe model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.",
"### Out-of-Scope Use\n\nThis model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making. \nMoreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.",
"## Bias, Risks, and Limitations\n\nWhile Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases. \nThere is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.",
"## How to Get Started with the Model\n\nYou can run conversational inference using the Transformers Auto classes.\nWe highly recommend that you add Korean system prompt for better output.\nAdjust the hyperparameters as you need.",
"### Example Usage",
"### Example Output",
"## Training Details",
"### Training Data\n\nThe model is trained on the MarkrAI/KoCommercial-Dataset, which consists of various commercial texts in Korean.",
"### Training Procedure\n\nThe model training used LoRA for computational efficiency. 0.02 billion parameters(0.26% of total parameters) were trained.",
"#### Training Hyperparameters",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Technical Specifications",
"### Compute Infrastructure",
"#### Hardware\n\n- GPU: NVIDIA GeForce RTX 4080 SUPER",
"#### Software\n\n- Operating System: Linux\n- Deep Learning Framework: Hugging Face Transformers, PyTorch",
"### Training Details\n\n- Training time: 32 hours\n- VRAM usage: 12.8 GB\n- GPU power usage: 300 W\n\n\n\nWaktaverse-Llama-3\n\n\n\nLlama-3",
"## Model Card Authors",
"## Model Card Contact"
] |
text-generation | transformers |
## Llama-3-KoEn-8B-Instruct-preview
> Update @ 2024.05.01: Pre-Release [Llama-3-KoEn-8B model](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) & [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)
## Model Details
**Llama-3-KoEn-8B-Instruct-preview**
The Llama-3-KoEn-8B model is a continued-pretrained language model based on Llama-3-8B.
Training was done on TPU v4-256, with the warm support of Google's TRC program.
Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799),
I released an instruction model named [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview).
It is NOT finetuned on any Korean instruction set (hence `preview`), but it should be a great starting point for creating new Chat/Instruct models.
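In weight space, the Chat Vector recipe amounts to adding `(instruct - base)` onto the continued-pretrained checkpoint. The snippet below is a minimal sketch of that arithmetic; it is illustrative only (the exact procedure used for this release may differ) and assumes all three checkpoints share the same shapes and vocabulary.

```python
import torch
from transformers import AutoModelForCausalLM

kwargs = dict(torch_dtype=torch.bfloat16, low_cpu_mem_usage=True)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", **kwargs)
inst = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", **kwargs)
koen = AutoModelForCausalLM.from_pretrained("beomi/Llama-3-KoEn-8B-preview", **kwargs)

with torch.no_grad():
    for name, param in koen.named_parameters():
        # chat vector = instruct - base; add it to the continued-pretrained weights
        param.add_(inst.get_parameter(name) - base.get_parameter(name))

koen.save_pretrained("Llama-3-KoEn-8B-Instruct-preview")  # output path is illustrative
```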
**Sample usage**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "beomi/Llama-3-KoEn-8B-Instruct-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=True,
temperature=1,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
**Sample Output**
피보나치 수열은 이탈리아 수학자 레오나르도 피보나치의 이름을 따서 유래된 수열입니다. 피보나치 수열은 특정 조건에 따라 얻어지는 등차수열로, 다음 수를 구하는 단순한 규칙에 의해 생성되는 수열입니다.
1, 1, 2, 3, 5, 8, 13, 21,... (등차 수열)
이러한 수열은 자연 세계에도 많이 발견됩니다. 예를 들어, 새싹이 자라는 방식, 달걀이 부화되는 횟수, 코끼리의 생장 등이 피보나치 수열을 따르기 때문입니다. 🌱🦆🐮
지금부터는 파이썬 코드를 작성해 볼까요? 😊
먼저, 파이썬 코드를 작성하기 위해 필요한 모듈은 없습니다. 저희는 피보나치 수열을 생성하는 알고리즘을 작성할 것입니다.
1. 파이썬 코드 생성
다음은 피보나치 수열을 생성하는 코드입니다. 이 코드는 `while` 루프를 사용하여 피보나치 수열을 생성합니다.
```python
def fib(n):
if n <= 1:
return n
a, b = 0, 1
for _ in range(2, n+1):
a, b = b, a + b
return b
n = int(input("피보나치 수열을 얼마나 생성할지 입력하세요: "))
for i in range(n):
print(fib(i+1))
```
2. 코드 설명
위 코드는 `fib(n)` 함수를 정의합니다. 이 함수는 `n`번째 피보나치 수를 반환합니다.
`fib(n)` 함수는 `if` 조건문을 사용하여 `n`이 0 또는 1일 경우 해당 값은 즉시 반환합니다.
`a, b = 0, 1`은 피보나치 수열의 초기 값을 설정합니다. `a`는 `0`을, `b`는 `1`을 할당합니다.
`for` 루프는 `range(2, n+1)`에서 생성되는 반복을 수행합니다. 이 반복은 `a, b = b, a + b`를 수행하여 피보나치 수열을 생성합니다.
`print(fib(i+1))`는 `fib(n)` 함수 호출 결과를 출력합니다.
3. 코드 실행
위 코드를 실행하기 위해 파이썬 인터프레터에서 다음과 같이 실행할 수 있습니다.
```python
$ python fib.py
피보나치 수열을 얼마나 생성할지 입력하세요: 10
1
1
2
3
5
8
13
21
34
55
```
위 코드를 실행하면, 사용자는 `피보나치 수열을 얼마나 생성할지 입력하세요:`를 입력할 수 있습니다. 이 입력은 `n`에 할당됩니다. `n`은 코드가 피보나치 수열을 생성하는 횟수를 결정하는 변수입니다. 사용자는 이 횟수를 입력하여 피보나치 수열을 생성할 수 있습니다.
이상 파이썬 코드를 작성하고 실행하는 방법을 설명했습니다. 😊
### More samples
> I used the same system prompt, but you can change it on your own.
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "한국의 제헌 헌법에 대해서 자세히 설명해주세요."},
]
---
대한민국 헌법은 1948년 7월 17일 제정된 이래 현재까지 대한민국의 기본법인 국가의 기본질서와 국가기관의 권한, 국민의 권리·의무, 통치조직, 통치방법 등을 규정하는 법률입니다. 헌법은 대한민국의 최고법령으로, 일반법상 모든 법률의 근거가 됩니다. 또한 헌법은 1987년 제정된 제6공화국 헌법부터는 '민주공화국'을 기본질서로 규정하고 있습니다.
** 주요 내용 **
1. 기본질서 : 민주공화국, 통일조국의 원칙, 인권의 보장
2. 국민의 권리 : 신체의 자유, 기본적 인권, 재산권, 참정권
3. 국가기관 : 대통령, 국무총리, 국회, 대법원, 헌법재판소, 검찰
4. 통치조직 : 행정부, 입법부, 사법부의 분리
5. 통치방법 : 민주주의(국민의 주권에 의하여 국민으로부터 위임받은 권력 행사), 법치주의(법률에 의하여 통치)
** 주요 조항 **
1. 제1조 : 대한민국은 민주공화국임을 천명
2. 제2조 : 민주공화국의 주권은 국민에게 있으며, 모든 권력은 국민으로부터 나온다
3. 제10조 : 모든 국민은 행복을 추구할 권리를 가진다
4. 제13조 : 모든 국민은 주거의 자유를 갖는다
5. 제25조 : 국가는 재해를 방지하고 그에 대처하는 시책을 시행하여야 한다
6. 제64조 : 대통령은 법률을 공포하고, 국무총리와 국무위원을 임명한다
7. 제111조 : 대법원은 최종적인 법률심판을 행하고, 헌법재판소는 헌법에 위반되는지 여부를 결정
** 헌법의 효력 **
헌법은 대한민국의 기본법률이므로 모든 국민은 헌법에 따라야 합니다. 또한 헌법은 변경하려면 헌법개정안을 제안하고, 이를 국민투표에 붙여야 하며, 이를 통과하면 새 헌법을 제정하게 됩니다. (제130조)
이상, 대한민국 헌법의 주요 내용과 조항, 효력을 설명하였습니다. 한국 헌법은 민주공화국의 기본질서를 중시하고, 기본권을 보호할 뿐만 아니라, 통치조직의 분리, 법치주의, 민주주의를 근간으로 합니다. 🇰🇷
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "이순신 장군에 대해 설명해줘."},
]
---
이순신 장군은 조선의 문신, 무신, 해군의 영웅으로 1592년부터 1598년까지 임진왜란과 정유재란 동안 해군으로서 활약했습니다. 그는 1591년 전라좌도 수군절도사가 되어 선제공격 전통을 이어받은 바, '신속성'과 '신용', '도탄전' 등의 병법을 도입하여 조선 수군을 개편했습니다. 이순신 장군의 가장 큰 업적으로는 두 가지를 꼽을 수 있습니다. 첫 번째는 '한산도 대첩'입니다. 1592년 7월 8일, 이순신 장군은 이끄는 조선 수군 13척이 왜군 133척에 맞서 13척의 배로는 불가능한 전쟁을 벌이게 됩니다. 이때 이순신 장군은 '진퇴유곡'의 전술을 사용하여 13척의 배를 133척의 배와 싸워 승리를 거두었습니다. 두 번째는 '노량해전'입니다. 1598년 11월 14일, 이순신 장군은 노량에서 12척의 배로 330척의 배를 이끄는 와키자카 야스하루와 싸우게 됩니다. 이때 이순신 장군의 배에 명중된 총알이 1발이었고, 12척의 배가 고전을 면치 못했지만, 이순신 장군은 끝까지 싸워 2명을 제외하고는 전원 전사하게 됩니다. 이순신 장군의 전투를 비롯하여 그의 업적은 조선 수군의 전투 승리를 이끌었으며, 그의 이름은 한국의 국보 제76호로 등재되어 있습니다. 이순신 장군의 업적과 정신은 한국 역사와 문화에 지대한 영향을 끼쳤습니다.
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "독도와 다케시마의 차이가 뭐야?"},
]
---
! `한국의 고유영토`와 `일본의 영유권 과장` 차이! 🌊
일본이 주장하는 `다케시마(竹島)`는, 실제로는 '울릉도' 서쪽 약 87km 지점에 있는 '독도(獨島)'를 가리킵니다. 한국이 역사적으로부터 지배하고 관리해온 것은 물론, 유네스코 세계유산으로 등재된 `대한민국의 영토`이지요. 😎
한편, 독도는 한국의 고유 이름으로, `울릉도`와 함께 `삼국사기`에 처음 기재된 `대한민국의 고유 섬`입니다. 1886년 일본이 '다케시마'라고 명명한 이래, 공식적으로 `일본 고유 섬`으로 주장하고 있지만, 이는 `역사 왜곡`입니다! ⚠️
따라서, 세계적으로 인정받는 국가의 `영토`는 한국의 `독도`가 맞는 것이지요! 🔥 `울릉도`도 `독도`도 한국의 고유 영토에 속한 섬들이랍니다! 💕
혹시 더 궁금한 사실이 있으면 말씀해주세요! 🤔
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "세종대왕의 집현전 맥북 던짐 사건에 대해서 알고 있어?"},
]
---
아주 흥미로운 사건입니다. 세종대왕이 집현전에서 맥북을 던졌다는 이야기는 없습니다. 실제로 세종대왕은 1418년 1월 9일 집현전에서 26명의 문신과 함께 새로운 훈민정음 반포를 위해 간자체의 글씨를 사용하는 것을 반대한 홍윤성 등의 무리와의 논쟁을 벌였습니다. 당시 세종대왕은 이들의 주장을 단호히 거부했습니다. 이 사건은 '집현전 논박 사건'으로 알려져 있습니다. 맥북과 관련된 일이 아니라는 것이죠. 맥북은 2006년 애플이 출시한 노트북의 브랜드명입니다. 세종대왕이 살았던 시기는 1418년에서 1450년까지입니다. 맥북은 훨씬 뒤의 일입니다. 😅
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "나는 오늘 사과 5개를 갖고있어. 그리고 어제 나는 사과 2개를 먹었지. 내가 지금 가진 사과는 몇 개지?"},
]
---
👀 당신은 오늘 5개의 사과를 갖고 있습니다. 그리고 어제 2개를 먹었다고 하셨으니, 오늘 현재 당신이 갖고 있는 사과는 5개에서 2개를 뺀 3개입니다! 😊
```
| {"language": ["en", "ko"], "license": "cc-by-nc-sa-4.0", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "llama-3-ko"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE"} | beomi/Llama-3-KoEn-8B-Instruct-preview | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"llama-3-ko",
"conversational",
"en",
"ko",
"arxiv:2310.04799",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:39:09+00:00 | [
"2310.04799"
] | [
"en",
"ko"
] | TAGS
#transformers #safetensors #llama #text-generation #facebook #meta #pytorch #llama-3 #llama-3-ko #conversational #en #ko #arxiv-2310.04799 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## Llama-3-KoEn-8B-Instruct-preview
> Update @ 2024.05.01: Pre-Release Llama-3-KoEn-8B model & Llama-3-KoEn-8B-Instruct-preview
## Model Details
Llama-3-KoEn-8B-Instruct-preview
Llama-3-KoEn-8B model is continued pretrained language model based on Llama-3-8B.
The train was done on TPUv4-256, with the warm support from TRC program by Google.
With applying the idea from Chat Vector paper,
I released Instruction model named Llama-3-KoEn-8B-Instruct-preview.
Since it is NOT finetuned with any Korean instruction set(indeed 'preview'), but it would be great starting point for creating new Chat/Instruct models.
Sample usage
Sample Output
피보나치 수열은 이탈리아 수학자 레오나르도 피보나치의 이름을 따서 유래된 수열입니다. 피보나치 수열은 특정 조건에 따라 얻어지는 등차수열로, 다음 수를 구하는 단순한 규칙에 의해 생성되는 수열입니다.
1, 1, 2, 3, 5, 8, 13, 21,... (등차 수열)
이러한 수열은 자연 세계에도 많이 발견됩니다. 예를 들어, 새싹이 자라는 방식, 달걀이 부화되는 횟수, 코끼리의 생장 등이 피보나치 수열을 따르기 때문입니다.
지금부터는 파이썬 코드를 작성해 볼까요?
먼저, 파이썬 코드를 작성하기 위해 필요한 모듈은 없습니다. 저희는 피보나치 수열을 생성하는 알고리즘을 작성할 것입니다.
1. 파이썬 코드 생성
다음은 피보나치 수열을 생성하는 코드입니다. 이 코드는 'while' 루프를 사용하여 피보나치 수열을 생성합니다.
2. 코드 설명
위 코드는 'fib(n)' 함수를 정의합니다. 이 함수는 'n'번째 피보나치 수를 반환합니다.
'fib(n)' 함수는 'if' 조건문을 사용하여 'n'이 0 또는 1일 경우 해당 값은 즉시 반환합니다.
'a, b = 0, 1'은 피보나치 수열의 초기 값을 설정합니다. 'a'는 '0'을, 'b'는 '1'을 할당합니다.
'for' 루프는 'range(2, n+1)'에서 생성되는 반복을 수행합니다. 이 반복은 'a, b = b, a + b'를 수행하여 피보나치 수열을 생성합니다.
'print(fib(i+1))'는 'fib(n)' 함수 호출 결과를 출력합니다.
3. 코드 실행
위 코드를 실행하기 위해 파이썬 인터프레터에서 다음과 같이 실행할 수 있습니다.
위 코드를 실행하면, 사용자는 '피보나치 수열을 얼마나 생성할지 입력하세요:'를 입력할 수 있습니다. 이 입력은 'n'에 할당됩니다. 'n'은 코드가 피보나치 수열을 생성하는 횟수를 결정하는 변수입니다. 사용자는 이 횟수를 입력하여 피보나치 수열을 생성할 수 있습니다.
이상 파이썬 코드를 작성하고 실행하는 방법을 설명했습니다.
### More samples
> I used same system prompt, but you could change on your own.
| [
"## Llama-3-KoEn-8B-Instruct-preview\n\n> Update @ 2024.05.01: Pre-Release Llama-3-KoEn-8B model & Llama-3-KoEn-8B-Instruct-preview",
"## Model Details\n\nLlama-3-KoEn-8B-Instruct-preview\n\nLlama-3-KoEn-8B model is continued pretrained language model based on Llama-3-8B.\n\nThe train was done on TPUv4-256, with the warm support from TRC program by Google.\n\nWith applying the idea from Chat Vector paper,\nI released Instruction model named Llama-3-KoEn-8B-Instruct-preview.\n\nSince it is NOT finetuned with any Korean instruction set(indeed 'preview'), but it would be great starting point for creating new Chat/Instruct models.\n\nSample usage\n\n\n\nSample Output\n \n 피보나치 수열은 이탈리아 수학자 레오나르도 피보나치의 이름을 따서 유래된 수열입니다. 피보나치 수열은 특정 조건에 따라 얻어지는 등차수열로, 다음 수를 구하는 단순한 규칙에 의해 생성되는 수열입니다.\n \n 1, 1, 2, 3, 5, 8, 13, 21,... (등차 수열)\n \n 이러한 수열은 자연 세계에도 많이 발견됩니다. 예를 들어, 새싹이 자라는 방식, 달걀이 부화되는 횟수, 코끼리의 생장 등이 피보나치 수열을 따르기 때문입니다. \n \n 지금부터는 파이썬 코드를 작성해 볼까요? \n \n 먼저, 파이썬 코드를 작성하기 위해 필요한 모듈은 없습니다. 저희는 피보나치 수열을 생성하는 알고리즘을 작성할 것입니다.\n \n 1. 파이썬 코드 생성\n 다음은 피보나치 수열을 생성하는 코드입니다. 이 코드는 'while' 루프를 사용하여 피보나치 수열을 생성합니다.\n \n \n 2. 코드 설명\n 위 코드는 'fib(n)' 함수를 정의합니다. 이 함수는 'n'번째 피보나치 수를 반환합니다.\n \n 'fib(n)' 함수는 'if' 조건문을 사용하여 'n'이 0 또는 1일 경우 해당 값은 즉시 반환합니다.\n \n 'a, b = 0, 1'은 피보나치 수열의 초기 값을 설정합니다. 'a'는 '0'을, 'b'는 '1'을 할당합니다.\n \n 'for' 루프는 'range(2, n+1)'에서 생성되는 반복을 수행합니다. 이 반복은 'a, b = b, a + b'를 수행하여 피보나치 수열을 생성합니다.\n \n 'print(fib(i+1))'는 'fib(n)' 함수 호출 결과를 출력합니다.\n \n 3. 코드 실행\n 위 코드를 실행하기 위해 파이썬 인터프레터에서 다음과 같이 실행할 수 있습니다.\n \n 위 코드를 실행하면, 사용자는 '피보나치 수열을 얼마나 생성할지 입력하세요:'를 입력할 수 있습니다. 이 입력은 'n'에 할당됩니다. 'n'은 코드가 피보나치 수열을 생성하는 횟수를 결정하는 변수입니다. 사용자는 이 횟수를 입력하여 피보나치 수열을 생성할 수 있습니다.\n \n 이상 파이썬 코드를 작성하고 실행하는 방법을 설명했습니다.",
"### More samples\n\n> I used same system prompt, but you could change on your own."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #facebook #meta #pytorch #llama-3 #llama-3-ko #conversational #en #ko #arxiv-2310.04799 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Llama-3-KoEn-8B-Instruct-preview\n\n> Update @ 2024.05.01: Pre-Release Llama-3-KoEn-8B model & Llama-3-KoEn-8B-Instruct-preview",
"## Model Details\n\nLlama-3-KoEn-8B-Instruct-preview\n\nLlama-3-KoEn-8B model is continued pretrained language model based on Llama-3-8B.\n\nThe train was done on TPUv4-256, with the warm support from TRC program by Google.\n\nWith applying the idea from Chat Vector paper,\nI released Instruction model named Llama-3-KoEn-8B-Instruct-preview.\n\nSince it is NOT finetuned with any Korean instruction set(indeed 'preview'), but it would be great starting point for creating new Chat/Instruct models.\n\nSample usage\n\n\n\nSample Output\n \n 피보나치 수열은 이탈리아 수학자 레오나르도 피보나치의 이름을 따서 유래된 수열입니다. 피보나치 수열은 특정 조건에 따라 얻어지는 등차수열로, 다음 수를 구하는 단순한 규칙에 의해 생성되는 수열입니다.\n \n 1, 1, 2, 3, 5, 8, 13, 21,... (등차 수열)\n \n 이러한 수열은 자연 세계에도 많이 발견됩니다. 예를 들어, 새싹이 자라는 방식, 달걀이 부화되는 횟수, 코끼리의 생장 등이 피보나치 수열을 따르기 때문입니다. \n \n 지금부터는 파이썬 코드를 작성해 볼까요? \n \n 먼저, 파이썬 코드를 작성하기 위해 필요한 모듈은 없습니다. 저희는 피보나치 수열을 생성하는 알고리즘을 작성할 것입니다.\n \n 1. 파이썬 코드 생성\n 다음은 피보나치 수열을 생성하는 코드입니다. 이 코드는 'while' 루프를 사용하여 피보나치 수열을 생성합니다.\n \n \n 2. 코드 설명\n 위 코드는 'fib(n)' 함수를 정의합니다. 이 함수는 'n'번째 피보나치 수를 반환합니다.\n \n 'fib(n)' 함수는 'if' 조건문을 사용하여 'n'이 0 또는 1일 경우 해당 값은 즉시 반환합니다.\n \n 'a, b = 0, 1'은 피보나치 수열의 초기 값을 설정합니다. 'a'는 '0'을, 'b'는 '1'을 할당합니다.\n \n 'for' 루프는 'range(2, n+1)'에서 생성되는 반복을 수행합니다. 이 반복은 'a, b = b, a + b'를 수행하여 피보나치 수열을 생성합니다.\n \n 'print(fib(i+1))'는 'fib(n)' 함수 호출 결과를 출력합니다.\n \n 3. 코드 실행\n 위 코드를 실행하기 위해 파이썬 인터프레터에서 다음과 같이 실행할 수 있습니다.\n \n 위 코드를 실행하면, 사용자는 '피보나치 수열을 얼마나 생성할지 입력하세요:'를 입력할 수 있습니다. 이 입력은 'n'에 할당됩니다. 'n'은 코드가 피보나치 수열을 생성하는 횟수를 결정하는 변수입니다. 사용자는 이 횟수를 입력하여 피보나치 수열을 생성할 수 있습니다.\n \n 이상 파이썬 코드를 작성하고 실행하는 방법을 설명했습니다.",
"### More samples\n\n> I used same system prompt, but you could change on your own."
] |
null | null |
## Introduce
Quantizing the [gradientai/Llama-3-8B-Instruct-Gradient-1048k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k) to f16, q2, q3, q4, q5, q6 and q8 with Llama.cpp.
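For reference, the typical Llama.cpp workflow for producing these files looks roughly like the commands below. This is a sketch only: the conversion script and quantization binary have been renamed across Llama.cpp versions, and the file names here are illustrative.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make
pip install -r requirements.txt

# 1) Convert the Hugging Face checkpoint to an f16 GGUF file
python convert-hf-to-gguf.py /path/to/Llama-3-8B-Instruct-Gradient-1048k \
    --outtype f16 --outfile Llama-3-8B-Instruct-Gradient-1048k.f16.gguf

# 2) Quantize the f16 file to the desired type (Q2_K, Q3_K_M, Q4_K_M, Q5_K_M, Q6_K, Q8_0, ...)
./quantize Llama-3-8B-Instruct-Gradient-1048k.f16.gguf \
    Llama-3-8B-Instruct-Gradient-1048k.Q4_K_M.gguf Q4_K_M
```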
| {"license": "apache-2.0"} | Monor/Llama-3-8B-Instruct-Gradient-1048k-gguf | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T14:40:45+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
## Introduce
Quantizing the gradientai/Llama-3-8B-Instruct-Gradient-1048k to f16, q2, q3, q4, q5, q6 and q8 with URL.
| [
"## Introduce\n\nQuantizing the gradientai/Llama-3-8B-Instruct-Gradient-1048k to f16, q2, q3, q4, q5, q6 and q8 with URL."
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"## Introduce\n\nQuantizing the gradientai/Llama-3-8B-Instruct-Gradient-1048k to f16, q2, q3, q4, q5, q6 and q8 with URL."
] |
null | null |
## Introduce
Quantizing the [shibing624/llama-3-8b-instruct-262k-chinese](https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese) to f16, q2, q3, q4, q5, q6 and q8 with Llama.cpp.
| {"license": "apache-2.0"} | Monor/llama-3-8b-instruct-262k-chinese-gguf | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T14:41:02+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
## Introduce
Quantizing the shibing624/llama-3-8b-instruct-262k-chinese to f16, q2, q3, q4, q5, q6 and q8 with URL.
| [
"## Introduce\n\nQuantizing the shibing624/llama-3-8b-instruct-262k-chinese to f16, q2, q3, q4, q5, q6 and q8 with URL."
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"## Introduce\n\nQuantizing the shibing624/llama-3-8b-instruct-262k-chinese to f16, q2, q3, q4, q5, q6 and q8 with URL."
] |
null | null |
## Introduce
Quantizing the [UnicomLLM/Unichat-llama3-Chinese-8B-28K](https://huggingface.co/UnicomLLM/Unichat-llama3-Chinese-8B-28K) to f16, q2, q3, q4, q5, q6 and q8 with Llama.cpp.
| {"license": "apache-2.0"} | Monor/Unichat-llama3-Chinese-8B-28K-gguf | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T14:41:18+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
## Introduce
Quantizing the UnicomLLM/Unichat-llama3-Chinese-8B-28K to f16, q2, q3, q4, q5, q6 and q8 with URL.
| [
"## Introduce\n\nQuantizing the UnicomLLM/Unichat-llama3-Chinese-8B-28K to f16, q2, q3, q4, q5, q6 and q8 with URL."
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"## Introduce\n\nQuantizing the UnicomLLM/Unichat-llama3-Chinese-8B-28K to f16, q2, q3, q4, q5, q6 and q8 with URL."
] |
null | null |
## Introduce
Quantizing the [namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA](https://huggingface.co/namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA) to f16, q2, q3, q4, q5, q6 and q8 with Llama.cpp.
| {"license": "apache-2.0"} | Monor/Llama-3-8B-Instruct-80K-QLoRA-gguf | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T14:41:33+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
## Introduce
Quantizing the namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA to f16, q2, q3, q4, q5, q6 and q8 with URL.
| [
"## Introduce\n\nQuantizing the namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA to f16, q2, q3, q4, q5, q6 and q8 with URL."
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"## Introduce\n\nQuantizing the namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA to f16, q2, q3, q4, q5, q6 and q8 with URL."
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
sn666 - bnb 4bits
- Model creator: https://huggingface.co/RobertML/
- Original model: https://huggingface.co/RobertML/sn666/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/RobertML_-_sn666-4bits | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-05-01T14:42:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
sn666 - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# cleatherbury/rocket-3B-Q6_K-GGUF
This model was converted to GGUF format from [`pansophic/rocket-3B`](https://huggingface.co/pansophic/rocket-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/pansophic/rocket-3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo cleatherbury/rocket-3B-Q6_K-GGUF --model rocket-3b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo cleatherbury/rocket-3B-Q6_K-GGUF --model rocket-3b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m rocket-3b.Q6_K.gguf -n 128
```
| {"language": ["en"], "license": "cc-by-sa-4.0", "tags": ["llama-cpp", "gguf-my-repo"], "base_model": "stabilityai/stablelm-3b-4e1t", "model-index": [{"name": "rocket-3b", "results": []}]} | cleatherbury/rocket-3B-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:stabilityai/stablelm-3b-4e1t",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2024-05-01T14:42:35+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #base_model-stabilityai/stablelm-3b-4e1t #license-cc-by-sa-4.0 #region-us
|
# cleatherbury/rocket-3B-Q6_K-GGUF
This model was converted to GGUF format from 'pansophic/rocket-3B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# cleatherbury/rocket-3B-Q6_K-GGUF\nThis model was converted to GGUF format from 'pansophic/rocket-3B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #base_model-stabilityai/stablelm-3b-4e1t #license-cc-by-sa-4.0 #region-us \n",
"# cleatherbury/rocket-3B-Q6_K-GGUF\nThis model was converted to GGUF format from 'pansophic/rocket-3B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-patch16-224-finetuned-footulcer
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0452
- Accuracy: 0.9914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
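For reference, these settings correspond roughly to the following `TrainingArguments` (a sketch reconstructed from the values above; `output_dir` is illustrative):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deit-base-patch16-224-finetuned-footulcer",  # illustrative
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 32 * 4 = 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
)
```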
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.97 | 8 | 0.2488 | 0.8966 |
| 0.5125 | 1.94 | 16 | 0.1675 | 0.9310 |
| 0.2843 | 2.91 | 24 | 0.0679 | 0.9828 |
| 0.1876 | 4.0 | 33 | 0.0452 | 0.9914 |
| 0.1566 | 4.85 | 40 | 0.0389 | 0.9914 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "facebook/deit-base-patch16-224", "model-index": [{"name": "deit-base-patch16-224-finetuned-footulcer", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9913793103448276, "name": "Accuracy"}]}]}]} | Nitish2801/deit-base-patch16-224-finetuned-footulcer | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:43:14+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/deit-base-patch16-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| deit-base-patch16-224-finetuned-footulcer
=========================================
This model is a fine-tuned version of facebook/deit-base-patch16-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0452
* Accuracy: 0.9914
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/deit-base-patch16-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |

# Meta-Llama-3-120B-Instruct
Meta-Llama-3-120B-Instruct is a self-merge with [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
## 🧩 Configuration
```yaml
slices:
- sources:
- layer_range: [0, 20]
model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
- layer_range: [10, 30]
model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
- layer_range: [20, 40]
model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
- layer_range: [30, 50]
model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
- layer_range: [40, 60]
model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
- layer_range: [50, 70]
model: meta-llama/Meta-Llama-3-70B-Instruct
- sources:
- layer_range: [60, 80]
model: meta-llama/Meta-Llama-3-70B-Instruct
merge_method: passthrough
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Llama-3-120B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "other", "tags": ["merge", "mergekit", "lazymergekit"], "base_model": ["meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama/Meta-Llama-3-70B-Instruct"]} | mlabonne/Meta-Llama-3-120B-Instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:43:27+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #conversational #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
!image/jpeg
# Meta-Llama-3-120B-Instruct
Meta-Llama-3-120B-Instruct is a self-merge with meta-llama/Meta-Llama-3-70B-Instruct.
## Configuration
## Usage
| [
"# Meta-Llama-3-120B-Instruct\n\nMeta-Llama-3-120B-Instruct is a self-merge with meta-llama/Meta-Llama-3-70B-Instruct.",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #conversational #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Meta-Llama-3-120B-Instruct\n\nMeta-Llama-3-120B-Instruct is a self-merge with meta-llama/Meta-Llama-3-70B-Instruct.",
"## Configuration",
"## Usage"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
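In the absence of an official snippet, one plausible starting point is the `text-classification` pipeline; the repository id below and the meaning of the predicted labels are assumptions based on this card.
```python
# Sketch only: the repo id and the label semantics are assumptions, not documented here.
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="nlpproject/IntentClassification_V3",
)
print(intent_classifier("I want to book a flight to Paris"))
```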
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | nlpproject/IntentClassification_V3 | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:43:35+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #distilbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #distilbert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Poojithpoosa/fakenew_model
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1306
- Validation Loss: 0.0752
- Train Accuracy: 0.9813
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
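In the meantime, a minimal usage sketch is shown below; the checkpoint was trained with TensorFlow/Keras, and what each class index means is not documented in this card.
```python
# Sketch only: the repo id comes from this card; the mapping of class indices
# (e.g. fake vs. real news) is not documented and is left as an assumption.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "Poojithpoosa/fakenew_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example headline to score", return_tensors="tf")
logits = model(**inputs).logits
print(tf.nn.softmax(logits, axis=-1).numpy())
```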
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 121765, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1150 | 0.0478 | 0.9805 | 0 |
| 0.1348 | 0.0933 | 0.9800 | 1 |
| 0.1306 | 0.0752 | 0.9813 | 2 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "Poojithpoosa/fakenew_model", "results": []}]} | Poojithpoosa/fakenew_model | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:43:42+00:00 | [] | [] | TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| Poojithpoosa/fakenew\_model
===========================
This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.1306
* Validation Loss: 0.0752
* Train Accuracy: 0.9813
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 121765, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.40.1
* TensorFlow 2.15.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 121765, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-google-bert/bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 121765, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Ehsan-Tavan/Generative-AV-Mistral-v0.1-7b | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:44:03+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | keras |
## Model description
More information needed
## Intended uses & limitations
More information needed
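As a minimal sketch, the saved Keras model can presumably be pulled from the Hub with `huggingface_hub`; the expected input shape and preprocessing are not documented here.
```python
# Sketch only: assumes the repo id below (and a huggingface_hub version that still
# ships from_pretrained_keras); input preprocessing must match the original training
# pipeline, which is not documented in this card.
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("robpetrosino/apziva-monreader-classifier")
model.summary()
```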
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
| {"library_name": "keras"} | robpetrosino/apziva-monreader-classifier | null | [
"keras",
"has_space",
"region:us"
] | null | 2024-05-01T14:44:29+00:00 | [] | [] | TAGS
#keras #has_space #region-us
| Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:"
] | [
"TAGS\n#keras #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-finetuned-TS
This model is a fine-tuned version of [openai-community/gpt2-medium](https://huggingface.co/openai-community/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3209
## Model description
More information needed
## Intended uses & limitations
More information needed
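As a minimal usage sketch (the repository id is taken from this card, and the sampling settings are only illustrative):
```python
# Sketch only: generation settings are illustrative, not those used during training.
from transformers import pipeline

generator = pipeline("text-generation", model="joaohonorato/gpt2-medium-finetuned-TS")
print(generator("Once upon a time", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```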
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 91 | 4.3816 |
| No log | 2.0 | 182 | 4.0213 |
| No log | 3.0 | 273 | 4.3209 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "openai-community/gpt2-medium", "model-index": [{"name": "gpt2-medium-finetuned-TS", "results": []}]} | joaohonorato/gpt2-medium-finetuned-TS | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:44:56+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-openai-community/gpt2-medium #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| gpt2-medium-finetuned-TS
========================
This model is a fine-tuned version of openai-community/gpt2-medium on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 4.3209
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.002
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-openai-community/gpt2-medium #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nash_dpo_merge_iter_3
This model is a fine-tuned version of [YYYYYYibo/nash_dpo_merge_iter_2](https://huggingface.co/YYYYYYibo/nash_dpo_merge_iter_2) on the updated and the original datasets.
It achieves the following results on the evaluation set:
- Loss: 0.5506
- Rewards/chosen: -0.4423
- Rewards/rejected: -0.9695
- Rewards/accuracies: 0.7060
- Rewards/margins: 0.5272
- Logps/rejected: -386.9142
- Logps/chosen: -353.7201
- Logits/rejected: 0.3453
- Logits/chosen: -0.2965
## Model description
More information needed
## Intended uses & limitations
More information needed
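Since this repository contains a PEFT (LoRA) adapter, a minimal loading sketch is shown below; it assumes the adapter is applied on top of the base model listed in the card metadata.
```python
# Sketch only: loads the adapter on top of the listed base model; requires peft,
# transformers and (for device_map="auto") accelerate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "alignment-handbook/zephyr-7b-sft-full"
adapter_id = "YYYYYYibo/nash_dpo_merge_iter_3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```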
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5641 | 0.49 | 100 | 0.5612 | -0.5147 | -0.9965 | 0.7020 | 0.4819 | -389.6182 | -360.9578 | 0.4191 | -0.1955 |
| 0.5515 | 0.98 | 200 | 0.5506 | -0.4423 | -0.9695 | 0.7060 | 0.5272 | -386.9142 | -353.7201 | 0.3453 | -0.2965 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo"], "datasets": ["updated", "original"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "nash_dpo_merge_iter_3", "results": []}]} | YYYYYYibo/nash_dpo_merge_iter_3 | null | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:updated",
"dataset:original",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T14:45:26+00:00 | [] | [] | TAGS
#peft #safetensors #mistral #alignment-handbook #generated_from_trainer #trl #dpo #dataset-updated #dataset-original #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #region-us
| nash\_dpo\_merge\_iter\_3
=========================
This model is a fine-tuned version of YYYYYYibo/nash\_dpo\_merge\_iter\_2 on the updated and the original datasets.
It achieves the following results on the evaluation set:
* Loss: 0.5506
* Rewards/chosen: -0.4423
* Rewards/rejected: -0.9695
* Rewards/accuracies: 0.7060
* Rewards/margins: 0.5272
* Logps/rejected: -386.9142
* Logps/chosen: -353.7201
* Logits/rejected: 0.3453
* Logits/chosen: -0.2965
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-06
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* total\_eval\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* PEFT 0.7.1
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #mistral #alignment-handbook #generated_from_trainer #trl #dpo #dataset-updated #dataset-original #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
sn666 - bnb 8bits
- Model creator: https://huggingface.co/RobertML/
- Original model: https://huggingface.co/RobertML/sn666/
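A minimal loading sketch for this 8-bit checkpoint is shown below; it assumes a CUDA GPU with `bitsandbytes` and `accelerate` installed, since the weights are stored pre-quantized.
```python
# Sketch only: loading a pre-quantized bitsandbytes 8-bit checkpoint typically
# requires a CUDA GPU plus the bitsandbytes and accelerate packages.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/RobertML_-_sn666-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```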
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/RobertML_-_sn666-8bits | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] | null | 2024-05-01T14:46:09+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
sn666 - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #8-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["llama-factory"]} | Coconuty/health_v1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:46:38+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #llama-factory #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #llama-factory #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Waktaverse-Llama-3-KO-8B-Instruct - bnb 8bits
- Model creator: https://huggingface.co/PathFinderKR/
- Original model: https://huggingface.co/PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct/
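The 8-bit weights in this repository can normally be loaded straight from the Hub with `transformers` and `bitsandbytes`. The snippet below is a minimal sketch, not an official example; it assumes a recent `transformers`/`bitsandbytes` install and a CUDA GPU.

```python
# Minimal sketch: load the pre-quantized bnb 8-bit checkpoint from this repo.
# Requires bitsandbytes and accelerate; a CUDA GPU is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-8bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # the saved quantization config keeps the weights in 8-bit
)

prompt = "안녕하세요"  # "Hello" — the model is tuned for Korean
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```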
Original model description:
---
language:
- ko
- en
license: llama3
library_name: transformers
datasets:
- MarkrAI/KoCommercial-Dataset
---
# Waktaverse-Llama-3-KO-8B-Instruct Model Card
## Model Details

Waktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by the Waktaverse AI team.
This large language model is a specialized version of Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks.
It is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.
- **Developed by:** Waktaverse AI
- **Model type:** Large Language Model
- **Language(s) (NLP):** Korean, English
- **License:** [Llama3](https://llama.meta.com/llama3/license)
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## Model Sources
- **Repository:** [GitHub](https://github.com/PathFinderKR/Waktaverse-LLM/tree/main)
- **Paper:** [More Information Needed]
## Uses
### Direct Use
The model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.
### Out-of-Scope Use
This model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making.
Moreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.
## Bias, Risks, and Limitations
While Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases.
There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.
## How to Get Started with the Model
You can run conversational inference using the Transformers Auto classes.
We highly recommend adding a Korean system prompt for better output.
Adjust the hyperparameters as needed.
### Example Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = (
"cuda:0" if torch.cuda.is_available() else # Nvidia GPU
"mps" if torch.backends.mps.is_available() else # Apple Silicon GPU
"cpu"
)
model_id = "PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
    device_map=device,  # place the model weights on the selected device
)
################################################################################
# Generation parameters
################################################################################
num_return_sequences=1
max_new_tokens=1024
temperature=0.9
top_k=40
top_p=0.9
repetition_penalty=1.1
def generate_response(system, user):
messages = [
{"role": "system", "content": system},
{"role": "user", "content": user}
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=False
)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
).to(device)
outputs = model.generate(
input_ids=input_ids,
pad_token_id=tokenizer.eos_token_id,
num_return_sequences=num_return_sequences,
max_new_tokens=max_new_tokens,
do_sample=True,
temperature=temperature,
top_k=top_k,
top_p=top_p,
repetition_penalty=repetition_penalty
)
return tokenizer.decode(outputs[0], skip_special_tokens=False)
system_prompt = "다음 지시사항에 대한 응답을 작성해주세요."
user_prompt = "피보나치 수열에 대해 설명해주세요."
response = generate_response(system_prompt, user_prompt)
print(response)
```
### Example Output
```python
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
다음 지시사항에 대한 응답을 작성해주세요.<|eot_id|><|start_header_id|>user<|end_header_id|>
피보나치 수열에 대해 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
피보나치 수열은 수학에서 가장 유명한 수열 중 하나로, 0과 1로 시작하는 숫자들의 모임입니다. 각 숫자는 이전 두 개의 숫자의 합으로 정의되며, 이렇게 계속 반복됩니다. 피보나치 수열은 무한히 커지는데, 첫 번째와 두 번째 항이 모두 0일 수도 있지만 일반적으로는 첫 번째 항이 1이고 두 번째 항이 1입니다.
예를 들어, 0 + 1 = 1, 1 + 1 = 2, 2 + 1 = 3, 3 + 2 = 5, 5 + 3 = 8, 8 + 5 = 13, 13 + 8 = 21, 21 + 13 = 34 등이 있습니다. 이 숫자들을 피보나치 수열이라고 합니다.
피보나치 수열은 다른 수열들과 함께 사용될 때 도움이 됩니다. 예를 들어, 금융 시장에서는 금리 수익률을 나타내기 위해 이 수열이 사용됩니다. 또한 컴퓨터 과학과 컴퓨터 과학에서도 종종 찾을 수 있습니다. 피보나치 수열은 매우 복잡하며 많은 숫자가 나오므로 일반적인 수열처럼 쉽게 구할 수 없습니다. 이 때문에 피보나치 수열은 대수적 함수와 관련이 있으며 수학자들은 이를 연구하고 계산하기 위해 다양한 알고리즘을 개발했습니다.
참고 자료: https://en.wikipedia.org/wiki/Fibonacci_sequence#Properties.<|eot_id|>
```
## Training Details
### Training Data
The model is trained on the [MarkrAI/KoCommercial-Dataset](https://huggingface.co/datasets/MarkrAI/KoCommercial-Dataset), which consists of various commercial texts in Korean.
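For a quick look at that corpus, it can be pulled with the `datasets` library; this is only a sketch and the split name is an assumption, so check the dataset card for the actual layout.

```python
# Sketch: inspect the Korean commercial-text corpus used for fine-tuning.
# The "train" split name and the printed fields are assumptions.
from datasets import load_dataset

ds = load_dataset("MarkrAI/KoCommercial-Dataset", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # one raw example
```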
### Training Procedure
The model was trained with LoRA for computational efficiency: 0.02 billion parameters (0.26% of the total) were trained.
#### Training Hyperparameters
```python
################################################################################
# bitsandbytes parameters
################################################################################
load_in_4bit=True
bnb_4bit_compute_dtype=torch_dtype
bnb_4bit_quant_type="nf4"
bnb_4bit_use_double_quant=False
################################################################################
# LoRA parameters
################################################################################
task_type="CAUSAL_LM"
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
r=8
lora_alpha=16
lora_dropout=0.05
bias="none"
################################################################################
# TrainingArguments parameters
################################################################################
num_train_epochs=1
per_device_train_batch_size=1
per_device_eval_batch_size=2
gradient_accumulation_steps=4
gradient_checkpointing=True
learning_rate=2e-5
lr_scheduler_type="cosine"
warmup_ratio=0.1
weight_decay=0.1
################################################################################
# SFT parameters
################################################################################
max_seq_length=1024
packing=True
```
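
As a rough illustration, the values above would be assembled with `bitsandbytes`, `peft`, and `trl` along the lines of the sketch below. This is not the authors' training script: the output path, the compute dtype (`torch_dtype` is referenced but not defined in the listing), the dataset text field, and the exact `SFTTrainer` signature (newer `trl` releases move several of these arguments into `SFTConfig`) are all assumptions.

```python
# Sketch of how the listed hyperparameters fit together (not the original script).
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption for the undefined torch_dtype
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
)
peft_config = LoraConfig(
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    r=8, lora_alpha=16, lora_dropout=0.05, bias="none",
)

model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

args = TrainingArguments(
    output_dir="waktaverse-llama-3-ko-lora",  # placeholder
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    weight_decay=0.1,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=load_dataset("MarkrAI/KoCommercial-Dataset", split="train"),
    peft_config=peft_config,
    dataset_text_field="text",  # assumption: depends on the dataset's columns
    max_seq_length=1024,        # older trl API; newer versions take this via SFTConfig
    packing=True,
)
trainer.train()
```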
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Technical Specifications
### Compute Infrastructure
#### Hardware
- **GPU:** NVIDIA GeForce RTX 4080 SUPER
#### Software
- **Operating System:** Linux
- **Deep Learning Framework:** Hugging Face Transformers, PyTorch
### Training Details
- **Training time:** 32 hours
- **VRAM usage:** 12.8 GB
- **GPU power usage:** 300 W
## Citation
**Waktaverse-Llama-3**
```
TBD
```
**Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## Model Card Authors
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-8bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-01T14:48:33+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
Waktaverse-Llama-3-KO-8B-Instruct - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
language:
- ko
- en
license: llama3
library_name: transformers
datasets:
- MarkrAI/KoCommercial-Dataset
---
# Waktaverse-Llama-3-KO-8B-Instruct Model Card
## Model Details
!image/webp
Waktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by Waktaverse AI team.
This large language model is a specialized version of the Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks.
It is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.
- Developed by: Waktaverse AI
- Model type: Large Language Model
- Language(s) (NLP): Korean, English
- License: Llama3
- Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct
## Model Sources
- Repository: GitHub
- Paper :
## Uses
### Direct Use
The model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.
### Out-of-Scope Use
This model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making.
Moreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.
## Bias, Risks, and Limitations
While Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases.
There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.
## How to Get Started with the Model
You can run conversational inference using the Transformers Auto classes.
We highly recommend that you add Korean system prompt for better output.
Adjust the hyperparameters as you need.
### Example Usage
### Example Output
## Training Details
### Training Data
The model is trained on the MarkrAI/KoCommercial-Dataset, which consists of various commercial texts in Korean.
### Training Procedure
The model training used LoRA for computational efficiency. 0.02 billion parameters(0.26% of total parameters) were trained.
#### Training Hyperparameters
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Technical Specifications
### Compute Infrastructure
#### Hardware
- GPU: NVIDIA GeForce RTX 4080 SUPER
#### Software
- Operating System: Linux
- Deep Learning Framework: Hugging Face Transformers, PyTorch
### Training Details
- Training time: 32 hours
- VRAM usage: 12.8 GB
- GPU power usage: 300 W
Waktaverse-Llama-3
Llama-3
## Model Card Authors
## Model Card Contact
| [
"# Waktaverse-Llama-3-KO-8B-Instruct Model Card",
"## Model Details\n\n!image/webp\nWaktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by Waktaverse AI team.\nThis large language model is a specialized version of the Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks. \nIt is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.\n\n- Developed by: Waktaverse AI\n- Model type: Large Language Model\n- Language(s) (NLP): Korean, English\n- License: Llama3\n- Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct",
"## Model Sources\n\n- Repository: GitHub\n- Paper :",
"## Uses",
"### Direct Use\n\nThe model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.",
"### Out-of-Scope Use\n\nThis model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making. \nMoreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.",
"## Bias, Risks, and Limitations\n\nWhile Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases. \nThere is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.",
"## How to Get Started with the Model\n\nYou can run conversational inference using the Transformers Auto classes.\nWe highly recommend that you add Korean system prompt for better output.\nAdjust the hyperparameters as you need.",
"### Example Usage",
"### Example Output",
"## Training Details",
"### Training Data\n\nThe model is trained on the MarkrAI/KoCommercial-Dataset, which consists of various commercial texts in Korean.",
"### Training Procedure\n\nThe model training used LoRA for computational efficiency. 0.02 billion parameters(0.26% of total parameters) were trained.",
"#### Training Hyperparameters",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Technical Specifications",
"### Compute Infrastructure",
"#### Hardware\n\n- GPU: NVIDIA GeForce RTX 4080 SUPER",
"#### Software\n\n- Operating System: Linux\n- Deep Learning Framework: Hugging Face Transformers, PyTorch",
"### Training Details\n\n- Training time: 32 hours\n- VRAM usage: 12.8 GB\n- GPU power usage: 300 W\n\n\n\nWaktaverse-Llama-3\n\n\n\nLlama-3",
"## Model Card Authors",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"# Waktaverse-Llama-3-KO-8B-Instruct Model Card",
"## Model Details\n\n!image/webp\nWaktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by Waktaverse AI team.\nThis large language model is a specialized version of the Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks. \nIt is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.\n\n- Developed by: Waktaverse AI\n- Model type: Large Language Model\n- Language(s) (NLP): Korean, English\n- License: Llama3\n- Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct",
"## Model Sources\n\n- Repository: GitHub\n- Paper :",
"## Uses",
"### Direct Use\n\nThe model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.",
"### Out-of-Scope Use\n\nThis model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making. \nMoreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.",
"## Bias, Risks, and Limitations\n\nWhile Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases. \nThere is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.",
"## How to Get Started with the Model\n\nYou can run conversational inference using the Transformers Auto classes.\nWe highly recommend that you add Korean system prompt for better output.\nAdjust the hyperparameters as you need.",
"### Example Usage",
"### Example Output",
"## Training Details",
"### Training Data\n\nThe model is trained on the MarkrAI/KoCommercial-Dataset, which consists of various commercial texts in Korean.",
"### Training Procedure\n\nThe model training used LoRA for computational efficiency. 0.02 billion parameters(0.26% of total parameters) were trained.",
"#### Training Hyperparameters",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Technical Specifications",
"### Compute Infrastructure",
"#### Hardware\n\n- GPU: NVIDIA GeForce RTX 4080 SUPER",
"#### Software\n\n- Operating System: Linux\n- Deep Learning Framework: Hugging Face Transformers, PyTorch",
"### Training Details\n\n- Training time: 32 hours\n- VRAM usage: 12.8 GB\n- GPU power usage: 300 W\n\n\n\nWaktaverse-Llama-3\n\n\n\nLlama-3",
"## Model Card Authors",
"## Model Card Contact"
] |
null | transformers |
# Uploaded model
- **Developed by:** Cognitus-Stuti
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | Cognitus-Stuti/llama3-8b-gguf | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:49:39+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: Cognitus-Stuti
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: Cognitus-Stuti\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: Cognitus-Stuti\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | manueldeprada/llama-66M-multisource | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:52:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# framing_classification_longformer_30_augmented_multi
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3585
- Accuracy: 0.6170
- F1: 0.1988
- Precision: 0.2058
- Recall: 0.2476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
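
As a rough sketch, the values above correspond to a `TrainingArguments` configuration like the one below. This is not the original training script; the output path, the number of labels (the 0.1429 recall floor suggests seven classes), and the datasets are assumptions.

```python
# Sketch only: the listed hyperparameters expressed as TrainingArguments.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_id = "allenai/longformer-base-4096"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=7)  # label count assumed

args = TrainingArguments(
    output_dir="framing_classification_longformer_30_augmented_multi",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer.

# trainer = Trainer(model=model, args=args,
#                   train_dataset=..., eval_dataset=...)  # datasets not published
# trainer.train()
```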
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:------:|:--------:|:------:|:---------------:|:---------:|:------:|
| 1.282 | 1.0 | 7043 | 0.5773 | 0.1691 | 1.5099 | 0.1473 | 0.2326 |
| 2.5135 | 2.0 | 14086 | 0.5455 | 0.1008 | 2.2716 | 0.0779 | 0.1429 |
| 2.384 | 3.0 | 21129 | 0.5455 | 0.1008 | 2.6195 | 0.0779 | 0.1429 |
| 2.5536 | 4.0 | 28172 | 0.5455 | 0.1008 | 2.3823 | 0.0779 | 0.1429 |
| 2.3549 | 5.0 | 35215 | 0.5455 | 0.1008 | 2.3964 | 0.0779 | 0.1429 |
| 2.3181 | 6.0 | 42258 | 0.5455 | 0.1008 | 2.4343 | 0.0779 | 0.1429 |
| 2.4398 | 7.0 | 49301 | 0.5455 | 0.1008 | 2.4609 | 0.0779 | 0.1429 |
| 2.3715 | 8.0 | 56344 | 0.5455 | 0.1008 | 2.4317 | 0.0779 | 0.1429 |
| 2.5554 | 9.0 | 63387 | 0.5455 | 0.1008 | 2.3966 | 0.0779 | 0.1429 |
| 1.3177 | 10.0 | 70430 | 0.5830 | 0.1707 | 1.4776 | 0.1472 | 0.2352 |
| 1.3928 | 11.0 | 77473 | 0.6159 | 0.1750 | 1.5114 | 0.1470 | 0.2348 |
| 1.5202 | 12.0 | 84516 | 0.6159 | 0.1746 | 1.4525 | 0.1465 | 0.2337 |
| 1.4013 | 13.0 | 91559 | 0.5909 | 0.1625 | 1.4524 | 0.1399 | 0.2113 |
| 1.4087 | 14.0 | 98602 | 0.5955 | 0.1736 | 1.4572 | 0.1484 | 0.2385 |
| 2.3755 | 15.0 | 105645 | 0.5727 | 0.1420 | 1.9328 | 0.1193 | 0.1771 |
| 2.2211 | 16.0 | 112688 | 0.5943 | 0.1596 | 1.7707 | 0.1317 | 0.2043 |
| 2.0359 | 17.0 | 119731 | 0.5830 | 0.1506 | 1.9399 | 0.1248 | 0.1900 |
| 1.7553 | 18.0 | 126774 | 0.5920 | 0.1580 | 1.8171 | 0.1306 | 0.2026 |
| 1.4321 | 19.0 | 133817 | 0.6125 | 0.1733 | 1.4162 | 0.1462 | 0.2317 |
| 1.4545 | 20.0 | 140860 | 0.6068 | 0.1728 | 1.4446 | 0.1466 | 0.2324 |
| 1.3939 | 21.0 | 147903 | 0.6148 | 0.1747 | 1.4451 | 0.1473 | 0.2345 |
| 1.4333 | 22.0 | 154946 | 0.5841 | 0.1702 | 1.4462 | 0.1474 | 0.2333 |
| 1.3013 | 23.0 | 161989 | 0.6170 | 0.1757 | 1.4099 | 0.1480 | 0.2363 |
| 1.397 | 24.0 | 169032 | 0.6170 | 0.1766 | 1.4181 | 0.1489 | 0.2385 |
| 1.4752 | 25.0 | 176075 | 0.6136 | 0.1727 | 1.3997 | 0.1444 | 0.2297 |
| 1.372 | 26.0 | 183118 | 0.6170 | 0.1748 | 1.4134 | 0.1471 | 0.2340 |
| 1.4563 | 27.0 | 190161 | 0.6205 | 0.1775 | 1.3920 | 0.1492 | 0.2394 |
| 1.3727 | 28.0 | 197204 | 0.6125 | 0.1737 | 1.3763 | 0.1465 | 0.2328 |
| 1.4587 | 29.0 | 204247 | 0.6170 | 0.1988 | 1.3585 | 0.2058 | 0.2476 |
| 1.2723 | 30.0 | 211290 | 0.6136 | 0.1973 | 1.3586 | 0.1967 | 0.2455 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "allenai/longformer-base-4096", "model-index": [{"name": "framing_classification_longformer_30_augmented_multi", "results": []}]} | AriyanH22/framing_classification_longformer_30_augmented_multi | null | [
"transformers",
"pytorch",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:allenai/longformer-base-4096",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:52:21+00:00 | [] | [] | TAGS
#transformers #pytorch #longformer #text-classification #generated_from_trainer #base_model-allenai/longformer-base-4096 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| framing\_classification\_longformer\_30\_augmented\_multi
=========================================================
This model is a fine-tuned version of allenai/longformer-base-4096 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3585
* Accuracy: 0.6170
* F1: 0.1988
* Precision: 0.2058
* Recall: 0.2476
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.32.0.dev0
* Pytorch 2.0.1
* Datasets 2.14.4
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.0.dev0\n* Pytorch 2.0.1\n* Datasets 2.14.4\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #longformer #text-classification #generated_from_trainer #base_model-allenai/longformer-base-4096 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.0.dev0\n* Pytorch 2.0.1\n* Datasets 2.14.4\n* Tokenizers 0.13.3"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | IN4/fast-whisper-v3-LoRA-8bit-epochs-3_num6_ru_kz | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:52:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.00001_withdpo_4iters_bs256_531lr_iter_3
This model is a fine-tuned version of [ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2](https://huggingface.co/ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
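No training code is shown in this card; as a hedged sketch only (not the actual alignment-handbook recipe, with dataset loading as a placeholder since the "updated"/"original" datasets are not public identifiers here), the hyperparameters above would map onto trl's DPO trainer roughly as follows:

```python
# Hypothetical sketch; exact argument names depend on the installed trl version.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2"  # previous iteration, per this card
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: a DPO dataset needs prompt/chosen/rejected fields.
train_dataset = load_dataset("json", data_files="preferences.jsonl", split="train")

args = DPOConfig(
    output_dir="dpo-iter-3",
    learning_rate=3e-7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,   # 8 GPUs x 8 per device x 4 steps = 256 effective batch
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)

trainer = DPOTrainer(
    model=model,           # the reference model is created internally when ref_model is omitted
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,   # newer trl versions name this argument processing_class
)
trainer.train()
```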
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2", "model-index": [{"name": "0.00001_withdpo_4iters_bs256_531lr_iter_3", "results": []}]} | ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:53:34+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.00001_withdpo_4iters_bs256_531lr_iter_3
This model is a fine-tuned version of ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2 on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| [
"# 0.00001_withdpo_4iters_bs256_531lr_iter_3\n\nThis model is a fine-tuned version of ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.00001_withdpo_4iters_bs256_531lr_iter_3\n\nThis model is a fine-tuned version of ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2 on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2"
] |
automatic-speech-recognition | transformers |
# Latvian Whisper small speech recognition model
Trained on a combination of:
- Common Voice 17, custom selection of all validated clips, max 1000 clips per speaker
- Fleurs, test+train+validation
Both the regular Whisper model and a CTranslate2-converted version, for use with [faster-whisper](https://github.com/SYSTRAN/faster-whisper) as part of the [Home Assistant Whisper integration](https://www.home-assistant.io/integrations/whisper/), are available.
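As a rough usage sketch (not an official example from this repository, and the exact location of the CTranslate2 files is an assumption), transcription with faster-whisper could look like:

```python
# Minimal sketch; the model path below is a placeholder for a local copy of the
# CTranslate2 conversion published alongside this model.
from faster_whisper import WhisperModel

model = WhisperModel("path/to/whisper-small-lv-ct2", device="cpu", compute_type="int8")

segments, info = model.transcribe("speech.wav", language="lv")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```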
To improve speech recognition quality, more data is needed, donate your voice on [Balsu talka](https://balsutalka.lv/) | {"language": ["lv"], "license": "apache-2.0", "tags": ["Whisper"], "metrics": [{"name": "wer", "type": "wer", "value": 9.56}], "pipeline_tag": "automatic-speech-recognition"} | RaivisDejus/whisper-small-lv | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"Whisper",
"lv",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"has_space"
] | null | 2024-05-01T14:54:03+00:00 | [] | [
"lv"
] | TAGS
#transformers #safetensors #whisper #automatic-speech-recognition #Whisper #lv #license-apache-2.0 #endpoints_compatible #region-us #has_space
|
# Latvian Whisper small speech recognition model
Trained on a combination of:
- Common Voice 17, custom selection of all validated clips, max 1000 clips per speaker
- Fleurs, test+train+validation
Both the regular Whisper model and a CTranslate2-converted version, for use with faster-whisper as part of the Home Assistant Whisper integration, are available.
To improve speech recognition quality, more data is needed, donate your voice on Balsu talka | [
"# Latvian Whisper small speech recognition model\n\nTrained on combination of:\n- Common Voice 17, custom selection of all validated clips, max 1000 clips per speaker\n- Fleurs, test+train+validation\n\nBoth regular whisper model and CTranslate2 converted version for use with faster-whisper as part of Home Assistant Whisper integration are available.\n\nTo improve speech recognition quality, more data is needed, donate your voice on Balsu talka"
] | [
"TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #Whisper #lv #license-apache-2.0 #endpoints_compatible #region-us #has_space \n",
"# Latvian Whisper small speech recognition model\n\nTrained on combination of:\n- Common Voice 17, custom selection of all validated clips, max 1000 clips per speaker\n- Fleurs, test+train+validation\n\nBoth regular whisper model and CTranslate2 converted version for use with faster-whisper as part of Home Assistant Whisper integration are available.\n\nTo improve speech recognition quality, more data is needed, donate your voice on Balsu talka"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PLN_TS
This model is a fine-tuned version of [openai-community/gpt2-medium](https://huggingface.co/openai-community/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.7341
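No usage snippet is included; as a minimal sketch (the repository id is taken from this card's metadata, and the prompt is arbitrary since the training data is not described), generation could look like:

```python
# Minimal sketch for trying the fine-tuned checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="joaohonorato/PLN_TS")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```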
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 91 | 4.6308 |
| No log | 2.0 | 182 | 5.1050 |
| No log | 3.0 | 273 | 5.5102 |
| No log | 4.0 | 364 | 6.2532 |
| No log | 5.0 | 455 | 6.6069 |
| 1.1628 | 6.0 | 546 | 7.0238 |
| 1.1628 | 7.0 | 637 | 7.1553 |
| 1.1628 | 8.0 | 728 | 7.7253 |
| 1.1628 | 9.0 | 819 | 8.2397 |
| 1.1628 | 10.0 | 910 | 8.9225 |
| 0.1611 | 11.0 | 1001 | 9.3999 |
| 0.1611 | 12.0 | 1092 | 9.8062 |
| 0.1611 | 13.0 | 1183 | 10.1804 |
| 0.1611 | 14.0 | 1274 | 10.5743 |
| 0.1611 | 15.0 | 1365 | 10.7341 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "openai-community/gpt2-medium", "model-index": [{"name": "PLN_TS", "results": []}]} | joaohonorato/PLN_TS | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:54:30+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-openai-community/gpt2-medium #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| PLN\_TS
=======
This model is a fine-tuned version of openai-community/gpt2-medium on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 10.7341
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.002
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-openai-community/gpt2-medium #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('blaackjack/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | blaackjack/sd-class-butterflies-32 | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-05-01T14:54:51+00:00 | [] | [] | TAGS
#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
|
# Model Card for Unit 1 of the Diffusion Models Class
This model is a diffusion model for unconditional image generation of cute .
## Usage
| [
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] | [
"TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n",
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] |
null | transformers |
# Uploaded model
- **Developed by:** CodeTriad
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
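No inference snippet is provided; a rough sketch with Unsloth (assuming this repository holds weights or a LoRA adapter that `FastLanguageModel.from_pretrained` can resolve, which the card does not state) might be:

```python
# Rough sketch only; requires a CUDA GPU, and whether this repo stores merged weights
# or a LoRA adapter is an assumption.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="CodeTriad/mistral_base_7754_epoch3_dpo",  # repo id from this card's metadata
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path

inputs = tokenizer("Explain DPO fine-tuning in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```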
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | CodeTriad/mistral_base_7754_epoch3_dpo | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T14:55:20+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: CodeTriad
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: CodeTriad\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: CodeTriad\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-text-to-sql
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
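Since this is a PEFT (LoRA) adapter on top of CodeLlama-7b, a minimal loading sketch could look like the following (the repository id comes from this card's metadata; the prompt format is a guess, as the training prompt template is not shown):

```python
# Minimal sketch; assumes the adapter in this repo loads cleanly onto its CodeLlama-7b base.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "Vivekg91/code-llama-7b-text-to-sql"
model = AutoPeftModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")
# If the adapter repo does not ship tokenizer files, load the tokenizer from codellama/CodeLlama-7b-hf instead.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

prompt = "Translate to SQL: list all customers who placed an order in 2023."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```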
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 | {"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "codellama/CodeLlama-7b-hf", "model-index": [{"name": "code-llama-7b-text-to-sql", "results": []}]} | Vivekg91/code-llama-7b-text-to-sql | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-05-01T14:55:50+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us
|
# code-llama-7b-text-to-sql
This model is a fine-tuned version of codellama/CodeLlama-7b-hf on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 | [
"# code-llama-7b-text-to-sql\n\nThis model is a fine-tuned version of codellama/CodeLlama-7b-hf on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us \n",
"# code-llama-7b-text-to-sql\n\nThis model is a fine-tuned version of codellama/CodeLlama-7b-hf on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Weblet/llama2-7b-hf-chat-lora-v3-turbo17145753578315382_cognitivecomputations-Code-290k-ShareGP | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:56:22+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: emmermarcell/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
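If you prefer to fetch the trained policy programmatically rather than through the browser flow above, a small sketch with `huggingface_hub` would be:

```python
# Sketch: downloads this repository (including the .onnx policy) for local use.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="emmermarcell/ppo-Huggy", local_dir="./downloads/ppo-Huggy")
print(f"Model files downloaded to: {local_dir}")
```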
| {"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]} | emmermarcell/ppo-Huggy | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null | 2024-05-01T14:56:31+00:00 | [] | [] | TAGS
#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
|
# ppo Agent playing Huggy
This is a trained model of a ppo agent playing Huggy
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: emmermarcell/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
| [
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: emmermarcell/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] | [
"TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n",
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: emmermarcell/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | cilantro9246/3w4w6gu | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T14:57:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Waktaverse-Llama-3-KO-8B-Instruct - GGUF
- Model creator: https://huggingface.co/PathFinderKR/
- Original model: https://huggingface.co/PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q2_K.gguf) | Q2_K | 2.96GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K.gguf) | Q3_K | 3.74GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_K.gguf) | Q4_K | 4.58GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_K.gguf) | Q5_K | 5.34GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Waktaverse-Llama-3-KO-8B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf/blob/main/Waktaverse-Llama-3-KO-8B-Instruct.Q6_K.gguf) | Q6_K | 6.14GB |
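These GGUF files work with llama.cpp-compatible runtimes. As a hedged sketch (the quant choice and context size here are arbitrary), downloading one file and running it with `llama-cpp-python` might look like:

```python
# Sketch only; any llama.cpp-compatible runtime can consume these GGUF files.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf",
    filename="Waktaverse-Llama-3-KO-8B-Instruct.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
# Prompt reused from the original card below ("Please explain the Fibonacci sequence.").
out = llm("피보나치 수열에 대해 설명해주세요.", max_tokens=256)
print(out["choices"][0]["text"])
```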
Original model description:
---
language:
- ko
- en
license: llama3
library_name: transformers
datasets:
- MarkrAI/KoCommercial-Dataset
---
# Waktaverse-Llama-3-KO-8B-Instruct Model Card
## Model Details

Waktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by the Waktaverse AI team.
This large language model is a specialized version of Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks.
It is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.
- **Developed by:** Waktaverse AI
- **Model type:** Large Language Model
- **Language(s) (NLP):** Korean, English
- **License:** [Llama3](https://llama.meta.com/llama3/license)
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## Model Sources
- **Repository:** [GitHub](https://github.com/PathFinderKR/Waktaverse-LLM/tree/main)
- **Paper :** [More Information Needed]
## Uses
### Direct Use
The model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.
### Out-of-Scope Use
This model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making.
Moreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.
## Bias, Risks, and Limitations
While Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases.
There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.
## How to Get Started with the Model
You can run conversational inference using the Transformers Auto classes.
We highly recommend adding a Korean system prompt for better output.
Adjust the hyperparameters as needed.
### Example Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = (
"cuda:0" if torch.cuda.is_available() else # Nvidia GPU
"mps" if torch.backends.mps.is_available() else # Apple Silicon GPU
"cpu"
)
model_id = "PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map=device,
)
################################################################################
# Generation parameters
################################################################################
num_return_sequences=1
max_new_tokens=1024
temperature=0.9
top_k=40
top_p=0.9
repetition_penalty=1.1
def generate_response(system, user):
    messages = [
        {"role": "system", "content": system},
        {"role": "user", "content": user}
    ]
    prompt = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=False
    )
    input_ids = tokenizer.encode(
        prompt,
        add_special_tokens=False,
        return_tensors="pt"
    ).to(device)
    outputs = model.generate(
        input_ids=input_ids,
        pad_token_id=tokenizer.eos_token_id,
        num_return_sequences=num_return_sequences,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=temperature,
        top_k=top_k,
        top_p=top_p,
        repetition_penalty=repetition_penalty
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=False)
system_prompt = "다음 지시사항에 대한 응답을 작성해주세요."  # "Please write a response to the following instructions."
user_prompt = "피보나치 수열에 대해 설명해주세요."  # "Please explain the Fibonacci sequence."
response = generate_response(system_prompt, user_prompt)
print(response)
```
### Example Output
```python
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
다음 지시사항에 대한 응답을 작성해주세요.<|eot_id|><|start_header_id|>user<|end_header_id|>
피보나치 수열에 대해 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
피보나치 수열은 수학에서 가장 유명한 수열 중 하나로, 0과 1로 시작하는 숫자들의 모임입니다. 각 숫자는 이전 두 개의 숫자의 합으로 정의되며, 이렇게 계속 반복됩니다. 피보나치 수열은 무한히 커지는데, 첫 번째와 두 번째 항이 모두 0일 수도 있지만 일반적으로는 첫 번째 항이 1이고 두 번째 항이 1입니다.
예를 들어, 0 + 1 = 1, 1 + 1 = 2, 2 + 1 = 3, 3 + 2 = 5, 5 + 3 = 8, 8 + 5 = 13, 13 + 8 = 21, 21 + 13 = 34 등이 있습니다. 이 숫자들을 피보나치 수열이라고 합니다.
피보나치 수열은 다른 수열들과 함께 사용될 때 도움이 됩니다. 예를 들어, 금융 시장에서는 금리 수익률을 나타내기 위해 이 수열이 사용됩니다. 또한 컴퓨터 과학과 컴퓨터 과학에서도 종종 찾을 수 있습니다. 피보나치 수열은 매우 복잡하며 많은 숫자가 나오므로 일반적인 수열처럼 쉽게 구할 수 없습니다. 이 때문에 피보나치 수열은 대수적 함수와 관련이 있으며 수학자들은 이를 연구하고 계산하기 위해 다양한 알고리즘을 개발했습니다.
참고 자료: https://en.wikipedia.org/wiki/Fibonacci_sequence#Properties.<|eot_id|>
```
## Training Details
### Training Data
The model is trained on the [MarkrAI/KoCommercial-Dataset](https://huggingface.co/datasets/MarkrAI/KoCommercial-Dataset), which consists of various commercial texts in Korean.
### Training Procedure
The model was trained with LoRA for computational efficiency; 0.02 billion parameters (0.26% of the total) were trained.
#### Training Hyperparameters
```python
################################################################################
# bitsandbytes parameters
################################################################################
load_in_4bit=True
bnb_4bit_compute_dtype=torch_dtype
bnb_4bit_quant_type="nf4"
bnb_4bit_use_double_quant=False
################################################################################
# LoRA parameters
################################################################################
task_type="CAUSAL_LM"
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
r=8
lora_alpha=16
lora_dropout=0.05
bias="none"
################################################################################
# TrainingArguments parameters
################################################################################
num_train_epochs=1
per_device_train_batch_size=1
per_device_eval_batch_size=2
gradient_accumulation_steps=4
gradient_checkpointing=True
learning_rate=2e-5
lr_scheduler_type="cosine"
warmup_ratio=0.1
weight_decay=0.1
################################################################################
# SFT parameters
################################################################################
max_seq_length=1024
packing=True
```
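
The training script itself is not published in this card, so the following is only a rough sketch of how the listed values could fit together, assuming the `peft` and `trl` libraries were used; the dataset text field, output directory, and compute dtype are assumptions rather than confirmed details.

```python
# Hypothetical reconstruction of the fine-tuning setup from the hyperparameters above.
# peft/trl usage, the dataset text field, and the compute dtype are assumptions.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig
from trl import SFTTrainer

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # the card references an unspecified torch_dtype
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
)

peft_config = LoraConfig(
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
)

training_args = TrainingArguments(
    output_dir="waktaverse-llama-3-ko-8b-instruct",  # illustrative
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    weight_decay=0.1,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)

dataset = load_dataset("MarkrAI/KoCommercial-Dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=dataset,
    dataset_text_field="text",  # field name is an assumption about the dataset schema
    peft_config=peft_config,
    max_seq_length=1024,
    packing=True,
)
trainer.train()
```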
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Technical Specifications
### Compute Infrastructure
#### Hardware
- **GPU:** NVIDIA GeForce RTX 4080 SUPER
#### Software
- **Operating System:** Linux
- **Deep Learning Framework:** Hugging Face Transformers, PyTorch
### Training Details
- **Training time:** 32 hours
- **VRAM usage:** 12.8 GB
- **GPU power usage:** 300 W
## Citation
**Waktaverse-Llama-3**
```
TBD
```
**Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
## Model Card Authors
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/PathFinderKR_-_Waktaverse-Llama-3-KO-8B-Instruct-gguf | null | [
"gguf",
"region:us"
] | null | 2024-05-01T14:59:03+00:00 | [] | [] | TAGS
#gguf #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
Waktaverse-Llama-3-KO-8B-Instruct - GGUF
* Model creator: URL
* Original model: URL
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q2\_K.gguf, Quant method: Q2\_K, Size: 2.96GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 3.28GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 3.43GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 3.41GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 3.52GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q3\_K.gguf, Quant method: Q3\_K, Size: 3.74GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 3.74GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 4.03GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 4.18GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q4\_0.gguf, Quant method: Q4\_0, Size: 4.34GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 4.38GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 4.37GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q4\_K.gguf, Quant method: Q4\_K, Size: 4.58GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 4.58GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q4\_1.gguf, Quant method: Q4\_1, Size: 4.78GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q5\_0.gguf, Quant method: Q5\_0, Size: 5.21GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 5.21GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q5\_K.gguf, Quant method: Q5\_K, Size: 5.34GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 5.34GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q5\_1.gguf, Quant method: Q5\_1, Size: 5.65GB
Name: Waktaverse-Llama-3-KO-8B-Instruct.Q6\_K.gguf, Quant method: Q6\_K, Size: 6.14GB
Original model description:
---------------------------
language:
* ko
* en
license: llama3
library\_name: transformers
datasets:
* MarkrAI/KoCommercial-Dataset
---
Waktaverse-Llama-3-KO-8B-Instruct Model Card
============================================
Model Details
-------------
!image/webp
Waktaverse-Llama-3-KO-8B-Instruct is a state-of-the-art Korean language model developed by Waktaverse AI team.
This large language model is a specialized version of the Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks.
It is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.
* Developed by: Waktaverse AI
* Model type: Large Language Model
* Language(s) (NLP): Korean, English
* License: Llama3
* Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct
Model Sources
-------------
* Repository: GitHub
* Paper :
Uses
----
### Direct Use
The model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.
### Out-of-Scope Use
This model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making.
Moreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.
Bias, Risks, and Limitations
----------------------------
While Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases.
There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.
How to Get Started with the Model
---------------------------------
You can run conversational inference using the Transformers Auto classes.
We highly recommend that you add Korean system prompt for better output.
Adjust the hyperparameters as you need.
### Example Usage
### Example Output
Training Details
----------------
### Training Data
The model is trained on the MarkrAI/KoCommercial-Dataset, which consists of various commercial texts in Korean.
### Training Procedure
The model training used LoRA for computational efficiency. 0.02 billion parameters(0.26% of total parameters) were trained.
#### Training Hyperparameters
Evaluation
----------
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
Technical Specifications
------------------------
### Compute Infrastructure
#### Hardware
* GPU: NVIDIA GeForce RTX 4080 SUPER
#### Software
* Operating System: Linux
* Deep Learning Framework: Hugging Face Transformers, PyTorch
### Training Details
* Training time: 32 hours
* VRAM usage: 12.8 GB
* GPU power usage: 300 W
Waktaverse-Llama-3
Llama-3
Model Card Authors
------------------
Model Card Contact
------------------
| [
"### Direct Use\n\n\nThe model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.",
"### Out-of-Scope Use\n\n\nThis model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making.\nMoreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nWhile Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases.\nThere is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nYou can run conversational inference using the Transformers Auto classes.\nWe highly recommend that you add Korean system prompt for better output.\nAdjust the hyperparameters as you need.",
"### Example Usage",
"### Example Output\n\n\nTraining Details\n----------------",
"### Training Data\n\n\nThe model is trained on the MarkrAI/KoCommercial-Dataset, which consists of various commercial texts in Korean.",
"### Training Procedure\n\n\nThe model training used LoRA for computational efficiency. 0.02 billion parameters(0.26% of total parameters) were trained.",
"#### Training Hyperparameters\n\n\nEvaluation\n----------",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary\n\n\nTechnical Specifications\n------------------------",
"### Compute Infrastructure",
"#### Hardware\n\n\n* GPU: NVIDIA GeForce RTX 4080 SUPER",
"#### Software\n\n\n* Operating System: Linux\n* Deep Learning Framework: Hugging Face Transformers, PyTorch",
"### Training Details\n\n\n* Training time: 32 hours\n* VRAM usage: 12.8 GB\n* GPU power usage: 300 W\n\n\nWaktaverse-Llama-3\n\n\nLlama-3\n\n\nModel Card Authors\n------------------\n\n\nModel Card Contact\n------------------"
] | [
"TAGS\n#gguf #region-us \n",
"### Direct Use\n\n\nThe model can be utilized directly for tasks such as text completion, summarization, and question answering without any fine-tuning.",
"### Out-of-Scope Use\n\n\nThis model is not intended for use in scenarios that involve high-stakes decision-making including medical, legal, or safety-critical areas due to the potential risks of relying on automated decision-making.\nMoreover, any attempt to deploy the model in a manner that infringes upon privacy rights or facilitates biased decision-making is strongly discouraged.\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nWhile Waktaverse Llama 3 is a robust model, it shares common limitations associated with machine learning models including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior under edge cases.\nThere is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.\n\n\nHow to Get Started with the Model\n---------------------------------\n\n\nYou can run conversational inference using the Transformers Auto classes.\nWe highly recommend that you add Korean system prompt for better output.\nAdjust the hyperparameters as you need.",
"### Example Usage",
"### Example Output\n\n\nTraining Details\n----------------",
"### Training Data\n\n\nThe model is trained on the MarkrAI/KoCommercial-Dataset, which consists of various commercial texts in Korean.",
"### Training Procedure\n\n\nThe model training used LoRA for computational efficiency. 0.02 billion parameters(0.26% of total parameters) were trained.",
"#### Training Hyperparameters\n\n\nEvaluation\n----------",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary\n\n\nTechnical Specifications\n------------------------",
"### Compute Infrastructure",
"#### Hardware\n\n\n* GPU: NVIDIA GeForce RTX 4080 SUPER",
"#### Software\n\n\n* Operating System: Linux\n* Deep Learning Framework: Hugging Face Transformers, PyTorch",
"### Training Details\n\n\n* Training time: 32 hours\n* VRAM usage: 12.8 GB\n* GPU power usage: 300 W\n\n\nWaktaverse-Llama-3\n\n\nLlama-3\n\n\nModel Card Authors\n------------------\n\n\nModel Card Contact\n------------------"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Boreas-7B - bnb 4bits
- Model creator: https://huggingface.co/yhavinga/
- Original model: https://huggingface.co/yhavinga/Boreas-7B/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Boreas-7B
Base model of [Boreas-7B-chat](https://huggingface.co/yhavinga/Boreas-7B-chat)
For more info refer to the readme of the chat model.
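
Since this card is only a pointer, the snippet below is a generic loading sketch for a bitsandbytes 4-bit checkpoint rather than anything taken from the upstream readme; the prompt is illustrative.

```python
# Generic loading sketch for the pre-quantized 4-bit checkpoint (requires bitsandbytes + accelerate).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/yhavinga_-_Boreas-7B-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The weights are already stored in bitsandbytes 4-bit form, so no extra
# quantization config is needed; device_map places them on the available device.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```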
| {} | RichardErkhov/yhavinga_-_Boreas-7B-4bits | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-01T14:59:28+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
Boreas-7B - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Boreas-7B
Base model of Boreas-7B-chat
For more info refer to the readme of the chat model.
| [
"# Model Card for Boreas-7B\n\nBase model of Boreas-7B-chat\nFor more info refer to the readme of the chat model."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Boreas-7B\n\nBase model of Boreas-7B-chat\nFor more info refer to the readme of the chat model."
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cancerfarore/bert-base-uncased-CancerFarore-Model
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0652
- Train End Logits Accuracy: 0.9800
- Train Start Logits Accuracy: 0.9786
- Validation Loss: 2.5446
- Validation End Logits Accuracy: 0.6075
- Validation Start Logits Accuracy: 0.6041
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
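
The card does not yet include a usage example. As a rough illustration only, an extractive question-answering checkpoint with this architecture can typically be queried as follows (the question and context strings are purely illustrative):

```python
# Minimal inference sketch for the TensorFlow question-answering checkpoint.
# The example question/context are illustrative, not drawn from the training data.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

model_id = "cancerfarore/bert-base-uncased-CancerFarore-Model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What does the model predict?"
context = "The fine-tuned BERT model predicts the start and end positions of an answer span inside a context passage."

inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)

# The start/end logits accuracies reported above correspond to these two output heads.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
print(answer)
```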
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18960, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.8107 | 0.4921 | 0.4706 | 1.4353 | 0.5224 | 0.5220 | 0 |
| 1.0870 | 0.6675 | 0.6432 | 1.2412 | 0.6071 | 0.6127 | 1 |
| 0.7170 | 0.7809 | 0.7596 | 1.3592 | 0.6071 | 0.5950 | 2 |
| 0.4657 | 0.8583 | 0.8418 | 1.4376 | 0.6266 | 0.6187 | 3 |
| 0.3015 | 0.9095 | 0.8967 | 1.7133 | 0.6289 | 0.6233 | 4 |
| 0.2080 | 0.9388 | 0.9279 | 2.0004 | 0.6127 | 0.5999 | 5 |
| 0.1521 | 0.9534 | 0.9488 | 2.0970 | 0.6157 | 0.6067 | 6 |
| 0.1054 | 0.9666 | 0.9650 | 2.3507 | 0.6187 | 0.6120 | 7 |
| 0.0850 | 0.9741 | 0.9728 | 2.5902 | 0.5977 | 0.5977 | 8 |
| 0.0652 | 0.9800 | 0.9786 | 2.5446 | 0.6075 | 0.6041 | 9 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "cancerfarore/bert-base-uncased-CancerFarore-Model", "results": []}]} | cancerfarore/bert-base-uncased-CancerFarore-Model | null | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:01:01+00:00 | [] | [] | TAGS
#transformers #tf #bert #question-answering #generated_from_keras_callback #base_model-google-bert/bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
| cancerfarore/bert-base-uncased-CancerFarore-Model
=================================================
This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.0652
* Train End Logits Accuracy: 0.9800
* Train Start Logits Accuracy: 0.9786
* Validation Loss: 2.5446
* Validation End Logits Accuracy: 0.6075
* Validation Start Logits Accuracy: 0.6041
* Epoch: 9
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 18960, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.40.1
* TensorFlow 2.15.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 18960, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tf #bert #question-answering #generated_from_keras_callback #base_model-google-bert/bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 18960, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Zangs3011/gemma2b_finetuned_awq | null | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-01T15:01:59+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #opt #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #opt #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | barsbold/diplom-tokenizer | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:02:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | MohammadKarami/whole-roBERTa | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:02:45+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - mrtuandao/dreambooth-LoRA-tuan-without-prior
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of SKS person using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
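
Until the official snippet is added, a minimal sketch (assuming the standard diffusers LoRA-loading API; the sampler settings below are illustrative) could look like this:

```python
# Hypothetical usage sketch; the repository does not yet publish an official example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA adaptation weights from this repository.
pipe.load_lora_weights("mrtuandao/dreambooth-LoRA-tuan-without-prior")

# The instance prompt used during training was "a photo of SKS person".
image = pipe(
    "a photo of SKS person",
    num_inference_steps=30,  # illustrative
    guidance_scale=7.5,      # illustrative
).images[0]
image.save("sks_person.png")
```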
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "diffusers", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true, "instance_prompt": "a photo of SKS person"} | mrtuandao/dreambooth-LoRA-tuan-without-prior | null | [
"diffusers",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-01T15:03:26+00:00 | [] | [] | TAGS
#diffusers #text-to-image #lora #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #region-us
|
# LoRA DreamBooth - mrtuandao/dreambooth-LoRA-tuan-without-prior
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of SKS person using DreamBooth. You can find some example images in the following.
!img_0
!img_1
!img_2
!img_3
LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# LoRA DreamBooth - mrtuandao/dreambooth-LoRA-tuan-without-prior\n\nThese are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of SKS person using DreamBooth. You can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nLoRA for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #text-to-image #lora #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #region-us \n",
"# LoRA DreamBooth - mrtuandao/dreambooth-LoRA-tuan-without-prior\n\nThese are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of SKS person using DreamBooth. You can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nLoRA for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
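The card itself gives no starter code, so the following is only a generic sketch inferred from the repository tags (a StableLM-style causal language model served through transformers); the prompt and generation settings are placeholders.

```python
# Generic causal-LM sketch; nothing below is confirmed by the card except the repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abc88767/model31"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```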
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abc88767/model31 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:03:45+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
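No starter code is provided, so here is only a sketch based on the repository tags (an ONNX export of a BERT sequence-classification model); it assumes the `optimum[onnxruntime]` extra is installed, and the input string is a placeholder since the label set is not documented.

```python
# Sketch: run the ONNX checkpoint through Optimum + ONNX Runtime (labels are undocumented).
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "profoz/parent_malicious_model_onnx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSequenceClassification.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("example input text"))
```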
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | profoz/parent_malicious_model_onnx | null | [
"transformers",
"onnx",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:03:53+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #onnx #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #onnx #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | Please use the ChatML prompt template, or download the following config file for LMStudio [HERE](https://huggingface.co/qnguyen3/14b-gguf/blob/main/chatml_viet.json) | {} | qnguyen3/14b-gguf-arxiv | null | [
"gguf",
"region:us"
] | null | 2024-05-01T15:05:25+00:00 | [] | [] | TAGS
#gguf #region-us
| Please use the ChatML prompt template, or download the following config file for LMStudio HERE | [] | [
"TAGS\n#gguf #region-us \n"
] |
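For reference, the ChatML template the record above points to wraps each turn in `<|im_start|>`/`<|im_end|>` markers. The sketch below assembles such a prompt by hand; the message contents are placeholders, and the exact system prompt shipped in the linked LMStudio config is not reproduced here.

```python
# Sketch: build a ChatML-formatted prompt string by hand (contents are placeholders).
system_msg = "You are a helpful assistant."
user_msg = "Hello!"
prompt = (
    f"<|im_start|>system\n{system_msg}<|im_end|>\n"
    f"<|im_start|>user\n{user_msg}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
print(prompt)
```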
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Boreas-7B - bnb 8bits
- Model creator: https://huggingface.co/yhavinga/
- Original model: https://huggingface.co/yhavinga/Boreas-7B/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Boreas-7B
Base model of [Boreas-7B-chat](https://huggingface.co/yhavinga/Boreas-7B-chat)
For more info refer to the readme of the chat model.
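As a rough sketch only: a checkpoint quantized to 8-bit with bitsandbytes is usually loaded straight from the Hub so that the stored quantization config takes effect. This assumes a CUDA GPU with bitsandbytes installed, and the prompt is a placeholder.

```python
# Sketch: load a pre-quantized 8-bit bitsandbytes checkpoint (assumes CUDA + bitsandbytes).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/yhavinga_-_Boreas-7B-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, tell me something about yourself.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```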
| {} | RichardErkhov/yhavinga_-_Boreas-7B-8bits | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-01T15:05:45+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
Boreas-7B - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Boreas-7B
Base model of Boreas-7B-chat
For more info refer to the readme of the chat model.
| [
"# Model Card for Boreas-7B\n\nBase model of Boreas-7B-chat\nFor more info refer to the readme of the chat model."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"# Model Card for Boreas-7B\n\nBase model of Boreas-7B-chat\nFor more info refer to the readme of the chat model."
] |
null | null | <p align="center">
<h1>DonuRVCFork 💖</h1>
</p>
🎉 Welcome!! 🎉
This project uses various libraries and modules to build a graphical user interface (GUI) for voice conversion.
At the moment it is the fastest and simplest way to get started with RVC!
It is mainly intended for local users. 🖥️
## 🖥️ Usage 🖥️
Once the project is finished and available for installation, detailed instructions on using the application will be provided here.
They will include the steps for setting up the application, launching it, and using its various features. 🌐
## 🌟 Features 🌟
RVC offers a number of features, including:
- 🎙️ **Convert audio to the voice model you want**: 
With RVC you can convert any audio using whichever voice model you prefer. It is like having a personal voice artist at your fingertips.
- ⚡ **Fast inference and training**: 
Thanks to code optimization and the use of modern hardware, RVC can run inference and train models in record time.
This saves your valuable time and lets you focus on what really matters.
- 💾 **Download voice models straight from the interface**: 
You can download models directly without using another interface; how convenient is that?
- 🔄 **Automatic model import**: 
No more loading models by hand. With automatic import, RVC detects and imports your models as soon as they become available.
- 🚀 **Additional and advanced conversion options**: 
RVC offers conversion options at the cutting edge of artificial intelligence. You can tailor the experience to your specific needs.
- 🧠 **Training custom models**: 
With RVC you can train your own voice models. This gives you even more control over the quality and characteristics of the generated voice.
- 🛠️ **Constantly updated**: 
RVC is a product under constant development. The engineering team is continuously working on improving and updating the system.
- 🗣️ **A choice of 3 different TTS models, including Edge TTS**: 
With RVC you are spoiled for choice. You can pick one of three different speech-synthesis models, including Edge TTS.
- ✔️ **Easy to use for inexperienced users**: 
Don't worry if you are not tech-savvy. RVC is designed so that everyone can use it easily, regardless of their level of experience.
| {} | NeuroDonu/donuvc | null | [
"region:us"
] | null | 2024-05-01T15:11:08+00:00 | [] | [] | TAGS
#region-us
| <p align="center">
<h1>DonuRVCFork </h1>
</p>
 Welcome!! 
This project uses various libraries and modules to build a graphical user interface (GUI) for voice conversion. 
At the moment it is the fastest and simplest way to get started with RVC!
It is mainly intended for local users. 
## Usage 
Once the project is finished and available for installation, detailed instructions on using the application will be provided here.
They will include the steps for setting up the application, launching it, and using its various features. 
## Features 
RVC offers a number of features, including:
- Convert audio to the voice model you want: 
With RVC you can convert any audio using whichever voice model you prefer. It is like having a personal voice artist at your fingertips.
- Fast inference and training: 
Thanks to code optimization and the use of modern hardware, RVC can run inference and train models in record time.
This saves your valuable time and lets you focus on what really matters.
- Download voice models straight from the interface: 
You can download models directly without using another interface; how convenient is that?
- Automatic model import: 
No more loading models by hand. With automatic import, RVC detects and imports your models as soon as they become available.
- Additional and advanced conversion options: 
RVC offers conversion options at the cutting edge of artificial intelligence. You can tailor the experience to your specific needs.
- Training custom models: 
With RVC you can train your own voice models. This gives you even more control over the quality and characteristics of the generated voice.
- Constantly updated: 
RVC is a product under constant development. The engineering team is continuously working on improving and updating the system.
- A choice of 3 different TTS models, including Edge TTS: 
With RVC you are spoiled for choice. You can pick one of three different speech-synthesis models, including Edge TTS.
- Easy to use for inexperienced users: 
Don't worry if you are not tech-savvy. RVC is designed so that everyone can use it easily, regardless of their level of experience.
| [
"## ️ Использование ️\r\n\r\nКак только проект будет завершен и доступен для установки, здесь будут представлены подробные инструкции по использованию приложения.\r\nОни будут включать в себя шаги по настройке приложения, запуску приложения и использованию различных функций приложения.",
"## Особенности \r\n\r\nRVC предлагает ряд функций, включая:\r\n\r\n- ️ Преобразование аудио в желаемую модель голоса: \r\nС помощью RVC вы можете преобразовать любой звук, используя ту модель голоса, которую вы предпочитаете. Это все равно что иметь под рукой личного художника по озвучиванию.\r\n\r\n- Быстрый вывод и обучение: \r\nБлагодаря оптимизации кода и использованию передового оборудования RVC сможет выполнять вывод и обучение модели за рекордно короткое время.\r\nЭто сэкономит ваше драгоценное время и позволит сосредоточиться на том, что действительно важно.\r\n\r\n- Скачать голосовую модель прямо из интерфейса: \r\nВы можете напрямую загружать модели, не используя другой интерфейс, насколько это удобно?\r\n\r\n- Автоматический импорт моделей: \r\nБольше не нужно загружать модели вручную. Благодаря автоматическому импорту RVC сможет обнаружить и импортировать ваши модели, как только они станут доступны.\r\n\r\n- Дополнительные и передовые возможности конвертации: \r\nRVC предлагает варианты конвертации, которые находятся на переднем крае искусственного интеллекта. Вы можете адаптировать свой опыт к вашим конкретным потребностям.\r\n\r\n- Обучение пользовательским моделям: \r\nС помощью RVC вы можете обучать свои собственные модели голоса. Это даст вам еще больше контроля над качеством и характеристиками генерируемого голоса.\r\n\r\n- ️ Постоянно обновляется.: \r\nRVC - это продукт, находящийся в постоянном развитии. Команда инженеров постоянно работает над улучшением и обновлением системы.\r\n\r\n- ️ Выбор из 3 различных моделей TTS, включая Edge TTS: \r\nС RVC вы избалованы выбором. Вы можете выбрать одну из трех различных моделей синтеза голоса, включая Edge TTS.\r\n\r\n- ️ Простота использования для неопытных пользователей: \r\nНе волнуйтесь, если вы не разбираетесь в технологиях. RVC разработан так, чтобы им могли легко пользоваться все, независимо от уровня их опыта."
] | [
"TAGS\n#region-us \n",
"## ️ Использование ️\r\n\r\nКак только проект будет завершен и доступен для установки, здесь будут представлены подробные инструкции по использованию приложения.\r\nОни будут включать в себя шаги по настройке приложения, запуску приложения и использованию различных функций приложения.",
"## Особенности \r\n\r\nRVC предлагает ряд функций, включая:\r\n\r\n- ️ Преобразование аудио в желаемую модель голоса: \r\nС помощью RVC вы можете преобразовать любой звук, используя ту модель голоса, которую вы предпочитаете. Это все равно что иметь под рукой личного художника по озвучиванию.\r\n\r\n- Быстрый вывод и обучение: \r\nБлагодаря оптимизации кода и использованию передового оборудования RVC сможет выполнять вывод и обучение модели за рекордно короткое время.\r\nЭто сэкономит ваше драгоценное время и позволит сосредоточиться на том, что действительно важно.\r\n\r\n- Скачать голосовую модель прямо из интерфейса: \r\nВы можете напрямую загружать модели, не используя другой интерфейс, насколько это удобно?\r\n\r\n- Автоматический импорт моделей: \r\nБольше не нужно загружать модели вручную. Благодаря автоматическому импорту RVC сможет обнаружить и импортировать ваши модели, как только они станут доступны.\r\n\r\n- Дополнительные и передовые возможности конвертации: \r\nRVC предлагает варианты конвертации, которые находятся на переднем крае искусственного интеллекта. Вы можете адаптировать свой опыт к вашим конкретным потребностям.\r\n\r\n- Обучение пользовательским моделям: \r\nС помощью RVC вы можете обучать свои собственные модели голоса. Это даст вам еще больше контроля над качеством и характеристиками генерируемого голоса.\r\n\r\n- ️ Постоянно обновляется.: \r\nRVC - это продукт, находящийся в постоянном развитии. Команда инженеров постоянно работает над улучшением и обновлением системы.\r\n\r\n- ️ Выбор из 3 различных моделей TTS, включая Edge TTS: \r\nС RVC вы избалованы выбором. Вы можете выбрать одну из трех различных моделей синтеза голоса, включая Edge TTS.\r\n\r\n- ️ Простота использования для неопытных пользователей: \r\nНе волнуйтесь, если вы не разбираетесь в технологиях. RVC разработан так, чтобы им могли легко пользоваться все, независимо от уровня их опыта."
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - kirillgoltsman/dreambooth
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
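Until the snippet above is filled in, a minimal sketch of sampling from this DreamBooth fine-tune with diffusers might look like this; only the repository id and the instance prompt come from the card, while the dtype and step count are assumptions.

```python
# Sketch only: the repo is a full Stable Diffusion pipeline, so it loads directly.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "kirillgoltsman/dreambooth", torch_dtype=torch.float16
).to("cuda")

# "a photo of sks dog" is the instance prompt this card reports for training.
image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```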
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "base_model": "runwayml/stable-diffusion-v1-5", "instance_prompt": "a photo of sks dog"} | kirillgoltsman/dreambooth | null | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-05-01T15:11:37+00:00 | [] | [] | TAGS
#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# DreamBooth - kirillgoltsman/dreambooth
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using DreamBooth.
You can find some example images in the following.
!img_0
!img_1
!img_2
!img_3
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# DreamBooth - kirillgoltsman/dreambooth\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using DreamBooth.\nYou can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# DreamBooth - kirillgoltsman/dreambooth\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using DreamBooth.\nYou can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation | transformers |
# Uploaded model
- **Developed by:** herisan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
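One possible way to load the fine-tune for inference is through Unsloth itself, sketched below; the sequence length, the 4-bit flag, and the example prompt are assumptions, and the prompt format the model actually expects is not documented on this card.

```python
# Sketch: load with Unsloth for inference (max_seq_length / load_in_4bit are assumptions).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="herisan/llama-3-8b_mental_health_counseling_conversations",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path

inputs = tokenizer("How can I cope with exam stress?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```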
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | herisan/llama-3-8b_mental_health_counseling_conversations | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:11:44+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: herisan
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: herisan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: herisan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeT5-small-without-lora-with-prompt
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
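For reference, the settings listed above translate roughly into the TrainingArguments sketch below; the output directory, the per-epoch evaluation (implied by the results table), and the choice of Trainer subclass are assumptions, and the dataset wiring is omitted because it is not documented.

```python
# Sketch: the hyperparameters above expressed as TrainingArguments (dataset/Trainer wiring omitted).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="codeT5-small-without-lora-with-prompt",  # assumed from the model name
    learning_rate=1e-4,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption, consistent with the per-epoch results table
)
# Pass `args` to a Trainer/Seq2SeqTrainer together with the tokenized datasets to reproduce the run.
```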
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1708 | 1.0 | 4383 | 0.9260 |
| 1.0645 | 2.0 | 8766 | 0.8791 |
| 1.0192 | 3.0 | 13149 | 0.8537 |
| 1.0103 | 4.0 | 17532 | 0.8397 |
| 0.9855 | 5.0 | 21915 | 0.8393 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "Salesforce/codet5-small", "model-index": [{"name": "codeT5-small-without-lora-with-prompt", "results": []}]} | EEsu/codeT5-adam-trial | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:12:07+00:00 | [] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-Salesforce/codet5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| codeT5-small-without-lora-with-prompt
=====================================
This model is a fine-tuned version of Salesforce/codet5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8393
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 12
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-Salesforce/codet5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
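Since the card leaves this section empty, the snippet below is only a generic sketch for a Hub-hosted Llama-style causal LM, inferred from the repository tags; the prompt and generation length are placeholders.

```python
# Generic sketch; nothing here is specific to this checkpoint beyond the repo id.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="saransh03sharma/mintrec2-llama-2-7b-50",
    device_map="auto",
)
print(generator("Hello!", max_new_tokens=64)[0]["generated_text"])
```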
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | saransh03sharma/mintrec2-llama-2-7b-50 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:13:33+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gen-z-translate-llama-3-instruct
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
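As an illustration only: since this repo holds a PEFT adapter for meta-llama/Meta-Llama-3-8B-Instruct, inference typically means loading the base model and attaching the adapter, roughly as sketched below; the dtype, device handling, and the example request are assumptions.

```python
# Sketch: attach the adapter in this repo to its base model (dtype/device are assumptions).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "llm-wizard/gen-z-translate-llama-3-instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Translate this into Gen-Z slang: that party was great."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(base.device)
outputs = model.generate(input_ids=input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```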
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "gen-z-translate-llama-3-instruct", "results": []}]} | llm-wizard/gen-z-translate-llama-3-instruct | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-05-01T15:16:22+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
|
# gen-z-translate-llama-3-instruct
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# gen-z-translate-llama-3-instruct\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 4\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n",
"# gen-z-translate-llama-3-instruct\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 4\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | awad9201/llama3-alpaca-dataset | null | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:16:30+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Boreas-7B - GGUF
- Model creator: https://huggingface.co/yhavinga/
- Original model: https://huggingface.co/yhavinga/Boreas-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Boreas-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Boreas-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Boreas-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Boreas-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Boreas-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Boreas-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [Boreas-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Boreas-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Boreas-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Boreas-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Boreas-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Boreas-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Boreas-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [Boreas-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Boreas-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Boreas-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Boreas-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Boreas-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Boreas-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Boreas-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Boreas-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q6_K.gguf) | Q6_K | 5.53GB |
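The table lists the files only; a common way to run one of them is with `llama-cpp-python` (or the `llama.cpp` CLI). The snippet below is a sketch — the choice of the Q4_K_M file, the context size, and the prompt are arbitrary, not recommendations from the quant or model authors.

```python
# Sketch: download one of the GGUF files listed above and run it locally with
# llama-cpp-python. The specific file, n_ctx, and prompt are arbitrary choices.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/yhavinga_-_Boreas-7B-gguf",
    filename="Boreas-7B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write one sentence introducing yourself.", max_tokens=64)
print(out["choices"][0]["text"])
```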
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Boreas-7B
Base model of [Boreas-7B-chat](https://huggingface.co/yhavinga/Boreas-7B-chat)
For more info refer to the readme of the chat model.
| {} | RichardErkhov/yhavinga_-_Boreas-7B-gguf | null | [
"gguf",
"region:us"
] | null | 2024-05-01T15:17:01+00:00 | [] | [] | TAGS
#gguf #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
Boreas-7B - GGUF
* Model creator: URL
* Original model: URL
Name: Boreas-7B.Q2\_K.gguf, Quant method: Q2\_K, Size: 2.53GB
Name: Boreas-7B.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 2.81GB
Name: Boreas-7B.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 2.96GB
Name: Boreas-7B.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 2.95GB
Name: Boreas-7B.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 3.06GB
Name: Boreas-7B.Q3\_K.gguf, Quant method: Q3\_K, Size: 3.28GB
Name: Boreas-7B.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 3.28GB
Name: Boreas-7B.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 3.56GB
Name: Boreas-7B.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 3.67GB
Name: Boreas-7B.Q4\_0.gguf, Quant method: Q4\_0, Size: 3.83GB
Name: Boreas-7B.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 3.87GB
Name: Boreas-7B.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 3.86GB
Name: Boreas-7B.Q4\_K.gguf, Quant method: Q4\_K, Size: 4.07GB
Name: Boreas-7B.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 4.07GB
Name: Boreas-7B.Q4\_1.gguf, Quant method: Q4\_1, Size: 4.24GB
Name: Boreas-7B.Q5\_0.gguf, Quant method: Q5\_0, Size: 4.65GB
Name: Boreas-7B.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 4.65GB
Name: Boreas-7B.Q5\_K.gguf, Quant method: Q5\_K, Size: 4.78GB
Name: Boreas-7B.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 4.78GB
Name: Boreas-7B.Q5\_1.gguf, Quant method: Q5\_1, Size: 5.07GB
Name: Boreas-7B.Q6\_K.gguf, Quant method: Q6\_K, Size: 5.53GB
Original model description:
---------------------------
library\_name: transformers
tags: []
------------------------------------
Model Card for Boreas-7B
========================
Base model of Boreas-7B-chat
For more info refer to the readme of the chat model.
| [] | [
"TAGS\n#gguf #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
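Since no snippet is given, the following is only a generic sketch that assumes the checkpoint under this repository id loads with the standard `transformers` auto classes and ships a chat template; the prompt and generation settings are illustrative.

```python
# Generic sketch only: nothing in this card documents the intended usage, so
# this just loads the checkpoint with the transformers auto classes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ai-maker-space/gen-z-translate-llama-3-instruct"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Translate this into Gen Z slang: that party was a lot of fun."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```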
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ai-maker-space/gen-z-translate-llama-3-instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:17:54+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | setfit |
# SetFit Aspect Model
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. **Use this SetFit model to filter these possible aspect span candidates.**
3. Use a SetFit model to classify the filtered aspect span candidates.
## Model Details
### Model Description
- **Model Type:** SetFit
<!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** id_core_news_trf
- **SetFitABSA Aspect Model:** [pahri/setfit-indo-resto-RM-ibu-imas-aspect](https://huggingface.co/pahri/setfit-indo-resto-RM-ibu-imas-aspect)
- **SetFitABSA Polarity Model:** [pahri/setfit-indo-resto-RM-ibu-imas-polarity](https://huggingface.co/pahri/setfit-indo-resto-RM-ibu-imas-polarity)
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| no aspect | <ul><li>'ambel leuncanya:ambel leuncanya enak terus pedesss'</li><li>'Warung Sunda:Warung Sunda murah meriah dan makanannya enak. Favorit selada air krispi dan ayam bakar'</li><li>'makanannya:Warung Sunda murah meriah dan makanannya enak. Favorit selada air krispi dan ayam bakar'</li></ul> |
| aspect | <ul><li>'ayam bakar:Warung Sunda murah meriah dan makanannya enak. Favorit selada air krispi dan ayam bakar'</li><li>'Ayam bakar:Ayam bakar,sambel leunca sambel terasi merah enak banget 9/10, perkedel jagung 8/10 makan pakai sambel mantap. Makan berdua sekitar 77k'</li><li>'sambel terasi merah:Ayam bakar,sambel leunca sambel terasi merah enak banget 9/10, perkedel jagung 8/10 makan pakai sambel mantap. Makan berdua sekitar 77k'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8063 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"pahri/setfit-indo-resto-RM-ibu-imas-aspect",
"pahri/setfit-indo-resto-RM-ibu-imas-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 37.7180 | 93 |
| Label | Training Sample Count |
|:----------|:----------------------|
| no aspect | 371 |
| aspect | 51 |
### Training Hyperparameters
- batch_size: (6, 6)
- num_epochs: (1, 16)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
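These values correspond to `setfit.TrainingArguments` fields. The training script itself is not included in the card, so the sketch below only illustrates how such a configuration is typically passed to SetFit's `AbsaTrainer`; the base sentence-transformer checkpoint and the tiny dataset are placeholders, not the ones actually used.

```python
# Illustrative sketch only: passing the hyperparameters listed above to SetFit's
# ABSA trainer. The base checkpoint and the one-row dataset are placeholders.
from datasets import Dataset
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

model = AbsaModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",  # placeholder base model
    spacy_model="id_core_news_trf",
)

# ABSA training data uses "text", "span", "label", "ordinal" columns.
train_dataset = Dataset.from_dict({
    "text": ["Ayam bakarnya enak banget"],
    "span": ["ayam bakar"],
    "label": ["positif"],   # placeholder polarity label
    "ordinal": [0],
})

args = TrainingArguments(
    batch_size=(6, 6),
    num_epochs=(1, 16),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    use_amp=True,
    warmup_proportion=0.1,
    seed=42,
)

trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset)
trainer.train()
```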
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.4225 | - |
| 0.0021 | 50 | 0.2528 | - |
| 0.0043 | 100 | 0.3611 | - |
| 0.0064 | 150 | 0.2989 | - |
| 0.0085 | 200 | 0.2907 | - |
| 0.0107 | 250 | 0.1609 | - |
| 0.0128 | 300 | 0.3534 | - |
| 0.0149 | 350 | 0.1294 | - |
| 0.0171 | 400 | 0.2797 | - |
| 0.0192 | 450 | 0.3119 | - |
| 0.0213 | 500 | 0.004 | - |
| 0.0235 | 550 | 0.1057 | - |
| 0.0256 | 600 | 0.1049 | - |
| 0.0277 | 650 | 0.1601 | - |
| 0.0299 | 700 | 0.151 | - |
| 0.0320 | 750 | 0.1034 | - |
| 0.0341 | 800 | 0.2356 | - |
| 0.0363 | 850 | 0.1335 | - |
| 0.0384 | 900 | 0.0559 | - |
| 0.0405 | 950 | 0.0028 | - |
| 0.0427 | 1000 | 0.1307 | - |
| 0.0448 | 1050 | 0.0049 | - |
| 0.0469 | 1100 | 0.1348 | - |
| 0.0491 | 1150 | 0.0392 | - |
| 0.0512 | 1200 | 0.054 | - |
| 0.0533 | 1250 | 0.0016 | - |
| 0.0555 | 1300 | 0.0012 | - |
| 0.0576 | 1350 | 0.0414 | - |
| 0.0597 | 1400 | 0.1087 | - |
| 0.0618 | 1450 | 0.0464 | - |
| 0.0640 | 1500 | 0.0095 | - |
| 0.0661 | 1550 | 0.0011 | - |
| 0.0682 | 1600 | 0.0002 | - |
| 0.0704 | 1650 | 0.1047 | - |
| 0.0725 | 1700 | 0.001 | - |
| 0.0746 | 1750 | 0.0965 | - |
| 0.0768 | 1800 | 0.0002 | - |
| 0.0789 | 1850 | 0.1436 | - |
| 0.0810 | 1900 | 0.0011 | - |
| 0.0832 | 1950 | 0.001 | - |
| 0.0853 | 2000 | 0.1765 | - |
| 0.0874 | 2050 | 0.1401 | - |
| 0.0896 | 2100 | 0.0199 | - |
| 0.0917 | 2150 | 0.0 | - |
| 0.0938 | 2200 | 0.0023 | - |
| 0.0960 | 2250 | 0.0034 | - |
| 0.0981 | 2300 | 0.0001 | - |
| 0.1002 | 2350 | 0.0948 | - |
| 0.1024 | 2400 | 0.1634 | - |
| 0.1045 | 2450 | 0.0 | - |
| 0.1066 | 2500 | 0.0005 | - |
| 0.1088 | 2550 | 0.0695 | - |
| 0.1109 | 2600 | 0.0 | - |
| 0.1130 | 2650 | 0.0067 | - |
| 0.1152 | 2700 | 0.0025 | - |
| 0.1173 | 2750 | 0.0013 | - |
| 0.1194 | 2800 | 0.1426 | - |
| 0.1216 | 2850 | 0.0001 | - |
| 0.1237 | 2900 | 0.0 | - |
| 0.1258 | 2950 | 0.0 | - |
| 0.1280 | 3000 | 0.0001 | - |
| 0.1301 | 3050 | 0.0001 | - |
| 0.1322 | 3100 | 0.0122 | - |
| 0.1344 | 3150 | 0.0 | - |
| 0.1365 | 3200 | 0.0001 | - |
| 0.1386 | 3250 | 0.0041 | - |
| 0.1408 | 3300 | 0.2549 | - |
| 0.1429 | 3350 | 0.0062 | - |
| 0.1450 | 3400 | 0.0154 | - |
| 0.1472 | 3450 | 0.1776 | - |
| 0.1493 | 3500 | 0.0039 | - |
| 0.1514 | 3550 | 0.0183 | - |
| 0.1536 | 3600 | 0.0045 | - |
| 0.1557 | 3650 | 0.1108 | - |
| 0.1578 | 3700 | 0.0002 | - |
| 0.1600 | 3750 | 0.01 | - |
| 0.1621 | 3800 | 0.0002 | - |
| 0.1642 | 3850 | 0.0001 | - |
| 0.1664 | 3900 | 0.1612 | - |
| 0.1685 | 3950 | 0.0107 | - |
| 0.1706 | 4000 | 0.0548 | - |
| 0.1728 | 4050 | 0.0001 | - |
| 0.1749 | 4100 | 0.0162 | - |
| 0.1770 | 4150 | 0.1294 | - |
| 0.1792 | 4200 | 0.0 | - |
| 0.1813 | 4250 | 0.0032 | - |
| 0.1834 | 4300 | 0.0051 | - |
| 0.1855 | 4350 | 0.0 | - |
| 0.1877 | 4400 | 0.0151 | - |
| 0.1898 | 4450 | 0.0097 | - |
| 0.1919 | 4500 | 0.0002 | - |
| 0.1941 | 4550 | 0.0045 | - |
| 0.1962 | 4600 | 0.0001 | - |
| 0.1983 | 4650 | 0.0001 | - |
| 0.2005 | 4700 | 0.0227 | - |
| 0.2026 | 4750 | 0.0018 | - |
| 0.2047 | 4800 | 0.0 | - |
| 0.2069 | 4850 | 0.0001 | - |
| 0.2090 | 4900 | 0.0 | - |
| 0.2111 | 4950 | 0.0 | - |
| 0.2133 | 5000 | 0.0 | - |
| 0.2154 | 5050 | 0.0002 | - |
| 0.2175 | 5100 | 0.0002 | - |
| 0.2197 | 5150 | 0.0038 | - |
| 0.2218 | 5200 | 0.0 | - |
| 0.2239 | 5250 | 0.0 | - |
| 0.2261 | 5300 | 0.0 | - |
| 0.2282 | 5350 | 0.0028 | - |
| 0.2303 | 5400 | 0.0 | - |
| 0.2325 | 5450 | 0.1146 | - |
| 0.2346 | 5500 | 0.0 | - |
| 0.2367 | 5550 | 0.0073 | - |
| 0.2389 | 5600 | 0.0467 | - |
| 0.2410 | 5650 | 0.0092 | - |
| 0.2431 | 5700 | 0.0196 | - |
| 0.2453 | 5750 | 0.0002 | - |
| 0.2474 | 5800 | 0.0043 | - |
| 0.2495 | 5850 | 0.0378 | - |
| 0.2517 | 5900 | 0.0049 | - |
| 0.2538 | 5950 | 0.0054 | - |
| 0.2559 | 6000 | 0.1757 | - |
| 0.2581 | 6050 | 0.0 | - |
| 0.2602 | 6100 | 0.0001 | - |
| 0.2623 | 6150 | 0.1327 | - |
| 0.2645 | 6200 | 0.0 | - |
| 0.2666 | 6250 | 0.0 | - |
| 0.2687 | 6300 | 0.0 | - |
| 0.2709 | 6350 | 0.0134 | - |
| 0.2730 | 6400 | 0.0001 | - |
| 0.2751 | 6450 | 0.0112 | - |
| 0.2773 | 6500 | 0.0864 | - |
| 0.2794 | 6550 | 0.0 | - |
| 0.2815 | 6600 | 0.0094 | - |
| 0.2837 | 6650 | 0.1358 | - |
| 0.2858 | 6700 | 0.0155 | - |
| 0.2879 | 6750 | 0.0025 | - |
| 0.2901 | 6800 | 0.0002 | - |
| 0.2922 | 6850 | 0.0001 | - |
| 0.2943 | 6900 | 0.2809 | - |
| 0.2965 | 6950 | 0.0 | - |
| 0.2986 | 7000 | 0.0242 | - |
| 0.3007 | 7050 | 0.0015 | - |
| 0.3028 | 7100 | 0.0 | - |
| 0.3050 | 7150 | 0.1064 | - |
| 0.3071 | 7200 | 0.1636 | - |
| 0.3092 | 7250 | 0.267 | - |
| 0.3114 | 7300 | 0.1656 | - |
| 0.3135 | 7350 | 0.0943 | - |
| 0.3156 | 7400 | 0.189 | - |
| 0.3178 | 7450 | 0.0055 | - |
| 0.3199 | 7500 | 0.1286 | - |
| 0.3220 | 7550 | 0.1062 | - |
| 0.3242 | 7600 | 0.1275 | - |
| 0.3263 | 7650 | 0.0101 | - |
| 0.3284 | 7700 | 0.0162 | - |
| 0.3306 | 7750 | 0.0001 | - |
| 0.3327 | 7800 | 0.0001 | - |
| 0.3348 | 7850 | 0.0003 | - |
| 0.3370 | 7900 | 0.0 | - |
| 0.3391 | 7950 | 0.135 | - |
| 0.3412 | 8000 | 0.0 | - |
| 0.3434 | 8050 | 0.0125 | - |
| 0.3455 | 8100 | 0.0004 | - |
| 0.3476 | 8150 | 0.0 | - |
| 0.3498 | 8200 | 0.2229 | - |
| 0.3519 | 8250 | 0.0 | - |
| 0.3540 | 8300 | 0.0051 | - |
| 0.3562 | 8350 | 0.0 | - |
| 0.3583 | 8400 | 0.0001 | - |
| 0.3604 | 8450 | 0.0 | - |
| 0.3626 | 8500 | 0.1261 | - |
| 0.3647 | 8550 | 0.0054 | - |
| 0.3668 | 8600 | 0.1636 | - |
| 0.3690 | 8650 | 0.0036 | - |
| 0.3711 | 8700 | 0.0 | - |
| 0.3732 | 8750 | 0.0027 | - |
| 0.3754 | 8800 | 0.0 | - |
| 0.3775 | 8850 | 0.1422 | - |
| 0.3796 | 8900 | 0.1314 | - |
| 0.3818 | 8950 | 0.003 | - |
| 0.3839 | 9000 | 0.0 | - |
| 0.3860 | 9050 | 0.0092 | - |
| 0.3882 | 9100 | 0.0129 | - |
| 0.3903 | 9150 | 0.0 | - |
| 0.3924 | 9200 | 0.0 | - |
| 0.3946 | 9250 | 0.1659 | - |
| 0.3967 | 9300 | 0.0 | - |
| 0.3988 | 9350 | 0.0 | - |
| 0.4010 | 9400 | 0.0085 | - |
| 0.4031 | 9450 | 0.0 | - |
| 0.4052 | 9500 | 0.0 | - |
| 0.4074 | 9550 | 0.0 | - |
| 0.4095 | 9600 | 0.0112 | - |
| 0.4116 | 9650 | 0.0 | - |
| 0.4138 | 9700 | 0.0154 | - |
| 0.4159 | 9750 | 0.0011 | - |
| 0.4180 | 9800 | 0.0077 | - |
| 0.4202 | 9850 | 0.0064 | - |
| 0.4223 | 9900 | 0.0 | - |
| 0.4244 | 9950 | 0.0 | - |
| 0.4265 | 10000 | 0.0121 | - |
| 0.4287 | 10050 | 0.0 | - |
| 0.4308 | 10100 | 0.0 | - |
| 0.4329 | 10150 | 0.0076 | - |
| 0.4351 | 10200 | 0.0039 | - |
| 0.4372 | 10250 | 0.2153 | - |
| 0.4393 | 10300 | 0.0 | - |
| 0.4415 | 10350 | 0.1218 | - |
| 0.4436 | 10400 | 0.0077 | - |
| 0.4457 | 10450 | 0.1311 | - |
| 0.4479 | 10500 | 0.0 | - |
| 0.4500 | 10550 | 0.0 | - |
| 0.4521 | 10600 | 0.0 | - |
| 0.4543 | 10650 | 0.0041 | - |
| 0.4564 | 10700 | 0.0073 | - |
| 0.4585 | 10750 | 0.0051 | - |
| 0.4607 | 10800 | 0.0 | - |
| 0.4628 | 10850 | 0.0 | - |
| 0.4649 | 10900 | 0.0 | - |
| 0.4671 | 10950 | 0.0001 | - |
| 0.4692 | 11000 | 0.0 | - |
| 0.4713 | 11050 | 0.1696 | - |
| 0.4735 | 11100 | 0.0 | - |
| 0.4756 | 11150 | 0.1243 | - |
| 0.4777 | 11200 | 0.0 | - |
| 0.4799 | 11250 | 0.0 | - |
| 0.4820 | 11300 | 0.0003 | - |
| 0.4841 | 11350 | 0.0707 | - |
| 0.4863 | 11400 | 0.166 | - |
| 0.4884 | 11450 | 0.4964 | - |
| 0.4905 | 11500 | 0.0023 | - |
| 0.4927 | 11550 | 0.0 | - |
| 0.4948 | 11600 | 0.0 | - |
| 0.4969 | 11650 | 0.173 | - |
| 0.4991 | 11700 | 0.0 | - |
| 0.5012 | 11750 | 0.0004 | - |
| 0.5033 | 11800 | 0.0 | - |
| 0.5055 | 11850 | 0.125 | - |
| 0.5076 | 11900 | 0.0042 | - |
| 0.5097 | 11950 | 0.012 | - |
| 0.5119 | 12000 | 0.0046 | - |
| 0.5140 | 12050 | 0.0001 | - |
| 0.5161 | 12100 | 0.0062 | - |
| 0.5183 | 12150 | 0.0 | - |
| 0.5204 | 12200 | 0.017 | - |
| 0.5225 | 12250 | 0.2668 | - |
| 0.5247 | 12300 | 0.0986 | - |
| 0.5268 | 12350 | 0.0071 | - |
| 0.5289 | 12400 | 0.0055 | - |
| 0.5311 | 12450 | 0.006 | - |
| 0.5332 | 12500 | 0.0057 | - |
| 0.5353 | 12550 | 0.0044 | - |
| 0.5375 | 12600 | 0.0039 | - |
| 0.5396 | 12650 | 0.1685 | - |
| 0.5417 | 12700 | 0.125 | - |
| 0.5438 | 12750 | 0.0026 | - |
| 0.5460 | 12800 | 0.0 | - |
| 0.5481 | 12850 | 0.0 | - |
| 0.5502 | 12900 | 0.1024 | - |
| 0.5524 | 12950 | 0.0 | - |
| 0.5545 | 13000 | 0.0 | - |
| 0.5566 | 13050 | 0.0083 | - |
| 0.5588 | 13100 | 0.0 | - |
| 0.5609 | 13150 | 0.0001 | - |
| 0.5630 | 13200 | 0.0 | - |
| 0.5652 | 13250 | 0.095 | - |
| 0.5673 | 13300 | 0.0001 | - |
| 0.5694 | 13350 | 0.0026 | - |
| 0.5716 | 13400 | 0.0 | - |
| 0.5737 | 13450 | 0.0041 | - |
| 0.5758 | 13500 | 0.1654 | - |
| 0.5780 | 13550 | 0.0003 | - |
| 0.5801 | 13600 | 0.0056 | - |
| 0.5822 | 13650 | 0.0 | - |
| 0.5844 | 13700 | 0.1012 | - |
| 0.5865 | 13750 | 0.0 | - |
| 0.5886 | 13800 | 0.0001 | - |
| 0.5908 | 13850 | 0.0042 | - |
| 0.5929 | 13900 | 0.0122 | - |
| 0.5950 | 13950 | 0.1047 | - |
| 0.5972 | 14000 | 0.0 | - |
| 0.5993 | 14050 | 0.0121 | - |
| 0.6014 | 14100 | 0.0 | - |
| 0.6036 | 14150 | 0.0 | - |
| 0.6057 | 14200 | 0.0 | - |
| 0.6078 | 14250 | 0.0105 | - |
| 0.6100 | 14300 | 0.0 | - |
| 0.6121 | 14350 | 0.011 | - |
| 0.6142 | 14400 | 0.0329 | - |
| 0.6164 | 14450 | 0.0942 | - |
| 0.6185 | 14500 | 0.0173 | - |
| 0.6206 | 14550 | 0.0 | - |
| 0.6228 | 14600 | 0.1032 | - |
| 0.6249 | 14650 | 0.016 | - |
| 0.6270 | 14700 | 0.0079 | - |
| 0.6292 | 14750 | 0.0 | - |
| 0.6313 | 14800 | 0.1088 | - |
| 0.6334 | 14850 | 0.0091 | - |
| 0.6356 | 14900 | 0.0039 | - |
| 0.6377 | 14950 | 0.0 | - |
| 0.6398 | 15000 | 0.0 | - |
| 0.6420 | 15050 | 0.0 | - |
| 0.6441 | 15100 | 0.1654 | - |
| 0.6462 | 15150 | 0.0 | - |
| 0.6484 | 15200 | 0.0002 | - |
| 0.6505 | 15250 | 0.0 | - |
| 0.6526 | 15300 | 0.1745 | - |
| 0.6548 | 15350 | 0.0 | - |
| 0.6569 | 15400 | 0.156 | - |
| 0.6590 | 15450 | 0.0 | - |
| 0.6611 | 15500 | 0.0 | - |
| 0.6633 | 15550 | 0.1755 | - |
| 0.6654 | 15600 | 0.008 | - |
| 0.6675 | 15650 | 0.0 | - |
| 0.6697 | 15700 | 0.0 | - |
| 0.6718 | 15750 | 0.0041 | - |
| 0.6739 | 15800 | 0.0037 | - |
| 0.6761 | 15850 | 0.0 | - |
| 0.6782 | 15900 | 0.0 | - |
| 0.6803 | 15950 | 0.0092 | - |
| 0.6825 | 16000 | 0.0071 | - |
| 0.6846 | 16050 | 0.0053 | - |
| 0.6867 | 16100 | 0.0 | - |
| 0.6889 | 16150 | 0.004 | - |
| 0.6910 | 16200 | 0.0036 | - |
| 0.6931 | 16250 | 0.0 | - |
| 0.6953 | 16300 | 0.0 | - |
| 0.6974 | 16350 | 0.184 | - |
| 0.6995 | 16400 | 0.0 | - |
| 0.7017 | 16450 | 0.0133 | - |
| 0.7038 | 16500 | 0.0 | - |
| 0.7059 | 16550 | 0.174 | - |
| 0.7081 | 16600 | 0.0 | - |
| 0.7102 | 16650 | 0.0233 | - |
| 0.7123 | 16700 | 0.0117 | - |
| 0.7145 | 16750 | 0.0272 | - |
| 0.7166 | 16800 | 0.0095 | - |
| 0.7187 | 16850 | 0.0 | - |
| 0.7209 | 16900 | 0.1656 | - |
| 0.7230 | 16950 | 0.0055 | - |
| 0.7251 | 17000 | 0.0 | - |
| 0.7273 | 17050 | 0.1716 | - |
| 0.7294 | 17100 | 0.0 | - |
| 0.7315 | 17150 | 0.0 | - |
| 0.7337 | 17200 | 0.1035 | - |
| 0.7358 | 17250 | 0.0694 | - |
| 0.7379 | 17300 | 0.1733 | - |
| 0.7401 | 17350 | 0.0092 | - |
| 0.7422 | 17400 | 0.1656 | - |
| 0.7443 | 17450 | 0.0 | - |
| 0.7465 | 17500 | 0.1655 | - |
| 0.7486 | 17550 | 0.0059 | - |
| 0.7507 | 17600 | 0.1116 | - |
| 0.7529 | 17650 | 0.0 | - |
| 0.7550 | 17700 | 0.0068 | - |
| 0.7571 | 17750 | 0.0053 | - |
| 0.7593 | 17800 | 0.0 | - |
| 0.7614 | 17850 | 0.0062 | - |
| 0.7635 | 17900 | 0.0104 | - |
| 0.7657 | 17950 | 0.1727 | - |
| 0.7678 | 18000 | 0.0 | - |
| 0.7699 | 18050 | 0.0 | - |
| 0.7721 | 18100 | 0.0 | - |
| 0.7742 | 18150 | 0.0714 | - |
| 0.7763 | 18200 | 0.0 | - |
| 0.7785 | 18250 | 0.0 | - |
| 0.7806 | 18300 | 0.0002 | - |
| 0.7827 | 18350 | 0.0 | - |
| 0.7848 | 18400 | 0.0 | - |
| 0.7870 | 18450 | 0.0996 | - |
| 0.7891 | 18500 | 0.0 | - |
| 0.7912 | 18550 | 0.0 | - |
| 0.7934 | 18600 | 0.0139 | - |
| 0.7955 | 18650 | 0.0 | - |
| 0.7976 | 18700 | 0.1701 | - |
| 0.7998 | 18750 | 0.0 | - |
| 0.8019 | 18800 | 0.0001 | - |
| 0.8040 | 18850 | 0.0 | - |
| 0.8062 | 18900 | 0.0 | - |
| 0.8083 | 18950 | 0.0 | - |
| 0.8104 | 19000 | 0.0 | - |
| 0.8126 | 19050 | 0.0 | - |
| 0.8147 | 19100 | 0.1093 | - |
| 0.8168 | 19150 | 0.0 | - |
| 0.8190 | 19200 | 0.0 | - |
| 0.8211 | 19250 | 0.0075 | - |
| 0.8232 | 19300 | 0.1079 | - |
| 0.8254 | 19350 | 0.0112 | - |
| 0.8275 | 19400 | 0.1655 | - |
| 0.8296 | 19450 | 0.0152 | - |
| 0.8318 | 19500 | 0.1152 | - |
| 0.8339 | 19550 | 0.0 | - |
| 0.8360 | 19600 | 0.0 | - |
| 0.8382 | 19650 | 0.0079 | - |
| 0.8403 | 19700 | 0.0 | - |
| 0.8424 | 19750 | 0.0 | - |
| 0.8446 | 19800 | 0.0 | - |
| 0.8467 | 19850 | 0.0 | - |
| 0.8488 | 19900 | 0.1161 | - |
| 0.8510 | 19950 | 0.0057 | - |
| 0.8531 | 20000 | 0.0 | - |
| 0.8552 | 20050 | 0.0046 | - |
| 0.8574 | 20100 | 0.0 | - |
| 0.8595 | 20150 | 0.0068 | - |
| 0.8616 | 20200 | 0.0 | - |
| 0.8638 | 20250 | 0.0 | - |
| 0.8659 | 20300 | 0.0 | - |
| 0.8680 | 20350 | 0.0 | - |
| 0.8702 | 20400 | 0.0141 | - |
| 0.8723 | 20450 | 0.0001 | - |
| 0.8744 | 20500 | 0.0 | - |
| 0.8766 | 20550 | 0.0 | - |
| 0.8787 | 20600 | 0.0171 | - |
| 0.8808 | 20650 | 0.0 | - |
| 0.8830 | 20700 | 0.0 | - |
| 0.8851 | 20750 | 0.0077 | - |
| 0.8872 | 20800 | 0.0 | - |
| 0.8894 | 20850 | 0.0 | - |
| 0.8915 | 20900 | 0.0 | - |
| 0.8936 | 20950 | 0.0 | - |
| 0.8958 | 21000 | 0.0 | - |
| 0.8979 | 21050 | 0.0 | - |
| 0.9000 | 21100 | 0.0 | - |
| 0.9021 | 21150 | 0.0 | - |
| 0.9043 | 21200 | 0.0 | - |
| 0.9064 | 21250 | 0.1048 | - |
| 0.9085 | 21300 | 0.006 | - |
| 0.9107 | 21350 | 0.0 | - |
| 0.9128 | 21400 | 0.0 | - |
| 0.9149 | 21450 | 0.005 | - |
| 0.9171 | 21500 | 0.0 | - |
| 0.9192 | 21550 | 0.0325 | - |
| 0.9213 | 21600 | 0.0136 | - |
| 0.9235 | 21650 | 0.0 | - |
| 0.9256 | 21700 | 0.0062 | - |
| 0.9277 | 21750 | 0.1656 | - |
| 0.9299 | 21800 | 0.1648 | - |
| 0.9320 | 21850 | 0.0 | - |
| 0.9341 | 21900 | 0.0 | - |
| 0.9363 | 21950 | 0.0 | - |
| 0.9384 | 22000 | 0.2844 | - |
| 0.9405 | 22050 | 0.0 | - |
| 0.9427 | 22100 | 0.0 | - |
| 0.9448 | 22150 | 0.0 | - |
| 0.9469 | 22200 | 0.0 | - |
| 0.9491 | 22250 | 0.0 | - |
| 0.9512 | 22300 | 0.2096 | - |
| 0.9533 | 22350 | 0.0073 | - |
| 0.9555 | 22400 | 0.006 | - |
| 0.9576 | 22450 | 0.0 | - |
| 0.9597 | 22500 | 0.0079 | - |
| 0.9619 | 22550 | 0.0071 | - |
| 0.9640 | 22600 | 0.0 | - |
| 0.9661 | 22650 | 0.006 | - |
| 0.9683 | 22700 | 0.1048 | - |
| 0.9704 | 22750 | 0.007 | - |
| 0.9725 | 22800 | 0.0 | - |
| 0.9747 | 22850 | 0.0 | - |
| 0.9768 | 22900 | 0.007 | - |
| 0.9789 | 22950 | 0.0 | - |
| 0.9811 | 23000 | 0.1049 | - |
| 0.9832 | 23050 | 0.0069 | - |
| 0.9853 | 23100 | 0.0 | - |
| 0.9875 | 23150 | 0.0 | - |
| 0.9896 | 23200 | 0.0 | - |
| 0.9917 | 23250 | 0.0 | - |
| 0.9939 | 23300 | 0.007 | - |
| 0.9960 | 23350 | 0.0147 | - |
| 0.9981 | 23400 | 0.0 | - |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- spaCy: 3.7.4
- Transformers: 4.36.2
- PyTorch: 2.1.2
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "widget": [{"text": "Suasana:Tempatnya ramai sekali dan ngantei banget. Suasana di dalam resto sangat panas dan padat. Makanannya enak enak."}, {"text": "bener2 pedes puolll:Rasanya sgt gak cocok dilidah gue orang bekasi.. ayamnya ayam kampung sih tp kecil bgt (beli yg dada).. terus tempe bacem sgt padet dan tahunya enak sih.. untuk sambel pedes bgt bener2 pedes puolll, tp rasanya gasukaa."}, {"text": "gang:Suasana di dalam resto sangat panas dan padat. Makanannya enak enak. Dan restonya ada di beberapa tempat dalam satu gang."}, {"text": "tempe:Menu makanannya khas Sunda ada ayam, pepes ikan, babat, tahu, tempe, sayur-sayur. Tidak banyak variasinya tapi kualitas rasanya oke. Saat itu pesen ayam bakar, jukut goreng, tempe sama pepes tahu. Ini semuanya enak (menurut pendapat pribadi)."}, {"text": "babat:Kemaren kebetulan makan babat sama nyobain cumi, buat tekstur babatnya itu engga alot sama sekali dan tidak amis, sedangkan buat cumi utuh lumayan gede juga tekstur kenyel kenyelnya dapet dan mateng juga sampe ke dalem. "}], "pipeline_tag": "text-classification", "inference": false, "model-index": [{"name": "SetFit Aspect Model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.80625, "name": "Accuracy"}]}]}]} | pahri/setfit-indo-resto-RM-ibu-imas-aspect | null | [
"setfit",
"safetensors",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"model-index",
"region:us"
] | null | 2024-05-01T15:18:12+00:00 | [
"2209.11055"
] | [] | TAGS
#setfit #safetensors #bert #absa #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #model-index #region-us
| SetFit Aspect Model
===================
This is a SetFit model that can be used for Aspect Based Sentiment Analysis (ABSA). A LogisticRegression instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a Sentence Transformer with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use this SetFit model to filter these possible aspect span candidates.
3. Use a SetFit model to classify the filtered aspect span candidates.
Model Details
-------------
### Model Description
* Model Type: SetFit
* Classification head: a LogisticRegression instance
* spaCy Model: id\_core\_news\_trf
* SetFitABSA Aspect Model: pahri/setfit-indo-resto-RM-ibu-imas-aspect
* SetFitABSA Polarity Model: pahri/setfit-indo-resto-RM-ibu-imas-polarity
* Maximum Sequence Length: 512 tokens
* Number of Classes: 2 classes
### Model Sources
* Repository: SetFit on GitHub
* Paper: Efficient Few-Shot Learning Without Prompts
* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts
### Model Labels
Evaluation
----------
### Metrics
Uses
----
### Direct Use for Inference
First install the SetFit library:
Then you can load this model and run inference.
Training Details
----------------
### Training Set Metrics
### Training Hyperparameters
* batch\_size: (6, 6)
* num\_epochs: (1, 16)
* max\_steps: -1
* sampling\_strategy: oversampling
* body\_learning\_rate: (2e-05, 1e-05)
* head\_learning\_rate: 0.01
* loss: CosineSimilarityLoss
* distance\_metric: cosine\_distance
* margin: 0.25
* end\_to\_end: False
* use\_amp: True
* warmup\_proportion: 0.1
* seed: 42
* eval\_max\_steps: -1
* load\_best\_model\_at\_end: False
### Training Results
### Framework Versions
* Python: 3.10.13
* SetFit: 1.0.3
* Sentence Transformers: 2.7.0
* spaCy: 3.7.4
* Transformers: 4.36.2
* PyTorch: 2.1.2
* Datasets: 2.18.0
* Tokenizers: 0.15.2
### BibTeX
| [
"### Model Description\n\n\n* Model Type: SetFit\n* Classification head: a LogisticRegression instance\n* spaCy Model: id\\_core\\_news\\_trf\n* SetFitABSA Aspect Model: pahri/setfit-indo-resto-RM-ibu-imas-aspect\n* SetFitABSA Polarity Model: pahri/setfit-indo-resto-RM-ibu-imas-polarity\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 2 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (6, 6)\n* num\\_epochs: (1, 16)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: True\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.13\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* spaCy: 3.7.4\n* Transformers: 4.36.2\n* PyTorch: 2.1.2\n* Datasets: 2.18.0\n* Tokenizers: 0.15.2",
"### BibTeX"
] | [
"TAGS\n#setfit #safetensors #bert #absa #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #model-index #region-us \n",
"### Model Description\n\n\n* Model Type: SetFit\n* Classification head: a LogisticRegression instance\n* spaCy Model: id\\_core\\_news\\_trf\n* SetFitABSA Aspect Model: pahri/setfit-indo-resto-RM-ibu-imas-aspect\n* SetFitABSA Polarity Model: pahri/setfit-indo-resto-RM-ibu-imas-polarity\n* Maximum Sequence Length: 512 tokens\n* Number of Classes: 2 classes",
"### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts",
"### Model Labels\n\n\n\nEvaluation\n----------",
"### Metrics\n\n\n\nUses\n----",
"### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------",
"### Training Set Metrics",
"### Training Hyperparameters\n\n\n* batch\\_size: (6, 6)\n* num\\_epochs: (1, 16)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: True\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False",
"### Training Results",
"### Framework Versions\n\n\n* Python: 3.10.13\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* spaCy: 3.7.4\n* Transformers: 4.36.2\n* PyTorch: 2.1.2\n* Datasets: 2.18.0\n* Tokenizers: 0.15.2",
"### BibTeX"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetune-test4
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1223
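As a minimal sketch of using the result (assuming the adapter weights live in the `AmaanUsmani/Finetune-test4` repository this card belongs to, and that `optimum`/`auto-gptq` are installed so the GPTQ base model can be loaded on a GPU):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
adapter_id = "AmaanUsmani/Finetune-test4"  # assumed adapter repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter produced by this training run.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "[INST] Write a short summary of quantized fine-tuning. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```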
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 20
- mixed_precision_training: Native AMP
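These settings map fairly directly onto `transformers.TrainingArguments`; a hedged reconstruction (the output directory is a placeholder, and any PEFT/LoRA configuration is omitted) might look like:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; Adam betas/epsilon are the defaults.
training_args = TrainingArguments(
    output_dir="finetune-test4",     # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,   # effective train batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=20,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```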
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.767 | 0.9956 | 56 | 0.5333 |
| 0.4313 | 1.9911 | 112 | 0.4449 |
| 0.3107 | 2.9867 | 168 | 0.4640 |
| 0.2198 | 4.0 | 225 | 0.5196 |
| 0.1633 | 4.9956 | 281 | 0.5811 |
| 0.1209 | 5.9911 | 337 | 0.6468 |
| 0.0944 | 6.9867 | 393 | 0.6891 |
| 0.0745 | 8.0 | 450 | 0.7297 |
| 0.064 | 8.9956 | 506 | 0.7844 |
| 0.0557 | 9.9911 | 562 | 0.8384 |
| 0.0489 | 10.9867 | 618 | 0.8632 |
| 0.0433 | 12.0 | 675 | 0.9223 |
| 0.0413 | 12.9956 | 731 | 0.9526 |
| 0.0389 | 13.9911 | 787 | 0.9552 |
| 0.0375 | 14.9867 | 843 | 1.0303 |
| 0.0355 | 16.0 | 900 | 1.0489 |
| 0.0355 | 16.9956 | 956 | 1.0804 |
| 0.0347 | 17.9911 | 1012 | 1.0983 |
| 0.0341 | 18.9867 | 1068 | 1.1147 |
| 0.0328 | 19.9111 | 1120 | 1.1223 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.0.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "Finetune-test4", "results": []}]} | AmaanUsmani/Finetune-test4 | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T15:20:37+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us
| Finetune-test4
==============
This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1223
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2
* num\_epochs: 20
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.40.1
* Pytorch 2.0.1+cu118
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.0.1+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.0.1+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
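The card leaves the snippet unfilled; as a hedged sketch based only on the repository tags (`gpt2`, `text-generation`), the checkpoint would presumably load like any GPT-2-style causal LM:

```python
from transformers import pipeline

# Assumes hams2/split1 behaves as a standard GPT-2-style text-generation checkpoint.
generator = pipeline("text-generation", model="hams2/split1")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```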
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | hams2/split1 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:24:07+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | hams2/split2 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:24:45+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | hams2/giveup | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:25:06+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
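The card leaves the snippet unfilled; as a hedged sketch based only on the repository tags (`bert`, `text-classification`), the checkpoint would presumably load as a standard sequence-classification model:

```python
from transformers import pipeline

# Assumes hams2/bertclass is a standard BERT sequence-classification checkpoint;
# the label names are whatever its config defines.
classifier = pipeline("text-classification", model="hams2/bertclass")
print(classifier("This is a test sentence."))
```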
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | hams2/bertclass | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:25:43+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |