pipeline_tag (string, 48 classes) | library_name (string, 198 classes) | text (string, 1–900k chars) | metadata (string, 2–438k chars) | id (string, 5–122 chars) | last_modified (null) | tags (list, 1–1.84k items) | sha (null) | created_at (string, 25 chars) | arxiv (list, 0–201 items) | languages (list, 0–1.83k items) | tags_str (string, 17–9.34k chars) | text_str (string, 0–389k chars) | text_lists (list, 0–722 items) | processed_texts (list, 1–723 items)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
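A minimal, untested sketch of loading the merged model for inference with transformers; the repository id below is this repo's id, and the dtype and generation settings are arbitrary choices:
```python
# Minimal sketch: load the SLERP-merged model and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mergekit-community/mergekit-slerp-bzasjyl"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("What is 12 * 7? Explain your steps.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```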
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["WizardLM/WizardMath-7B-V1.1", "NousResearch/Hermes-2-Pro-Mistral-7B"]}
|
mergekit-community/mergekit-slerp-bzasjyl
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:WizardLM/WizardMath-7B-V1.1",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T11:34:33+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-WizardLM/WizardMath-7B-V1.1 #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* WizardLM/WizardMath-7B-V1.1
* NousResearch/Hermes-2-Pro-Mistral-7B
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* WizardLM/WizardMath-7B-V1.1\n* NousResearch/Hermes-2-Pro-Mistral-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-WizardLM/WizardMath-7B-V1.1 #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* WizardLM/WizardMath-7B-V1.1\n* NousResearch/Hermes-2-Pro-Mistral-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
sentence-similarity
|
sentence-transformers
|
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
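If installing sentence-transformers is not an option, the same embeddings can be reproduced with plain transformers. This is a sketch based on the architecture shown below (CLS-token pooling followed by L2 normalization):
```python
# Sketch: reproduce the embeddings with plain transformers.
# CLS pooling + normalization mirrors the Pooling/Normalize modules listed
# under "Full Model Architecture" below.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

sentences = ["This is an example sentence", "Each sentence is converted"]

tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    output = model(**encoded)

# Take the [CLS] token embedding, then L2-normalize
embeddings = F.normalize(output.last_hidden_state[:, 0], p=2, dim=1)
print(embeddings)
```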
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
|
Changlin/emb
| null |
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T11:35:39+00:00
|
[] |
[] |
TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Full Model Architecture
## Citing & Authors
|
[
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
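As a starting point, a minimal, untested sketch of downloading and evaluating the checkpoint; the zip filename inside the repo is an assumption:
```python
# Untested sketch: download the checkpoint from the Hub and evaluate it.
# The filename "ppo-LunarLander-v2.zip" is an assumption about the repo contents.
# LunarLander-v2 requires gymnasium[box2d].
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="omsharma24/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```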
|
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "257.04 +/- 30.98", "name": "mean_reward", "verified": false}]}]}]}
|
omsharma24/ppo-LunarLander-v2
| null |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-15T11:37:36+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
|
[
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
[
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Moistral-11B-v2](https://huggingface.co/TheDrummer/Moistral-11B-v2)
* [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: TheDrummer/Moistral-11B-v2
layer_range: [0, 32]
- model: Sao10K/Fimbulvetr-11B-v2
layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/Fimbulvetr-11B-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["TheDrummer/Moistral-11B-v2", "Sao10K/Fimbulvetr-11B-v2"]}
|
mergekit-community/mergekit-slerp-oskyrzi
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:TheDrummer/Moistral-11B-v2",
"base_model:Sao10K/Fimbulvetr-11B-v2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T11:37:38+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-TheDrummer/Moistral-11B-v2 #base_model-Sao10K/Fimbulvetr-11B-v2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* TheDrummer/Moistral-11B-v2
* Sao10K/Fimbulvetr-11B-v2
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* TheDrummer/Moistral-11B-v2\n* Sao10K/Fimbulvetr-11B-v2",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-TheDrummer/Moistral-11B-v2 #base_model-Sao10K/Fimbulvetr-11B-v2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* TheDrummer/Moistral-11B-v2\n* Sao10K/Fimbulvetr-11B-v2",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
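A minimal sketch of one way to load the adapter with PEFT, assuming it targets a causal language model, that the base model referenced in the adapter config is resolvable, and that the repo also ships tokenizer files:
```python
# Hedged sketch: assumes a causal-LM adapter whose base model (named in
# adapter_config.json) can be resolved locally or on the Hub, and that the
# repository includes tokenizer files.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "ofua/confidence_oair_v1_merged"  # this repository
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```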
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
{"library_name": "peft", "base_model": "confidence_sft_v1_full_merge_1"}
|
ofua/confidence_oair_v1_merged
| null |
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:confidence_sft_v1_full_merge_1",
"region:us"
] | null |
2024-04-15T11:38:14+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-confidence_sft_v1_full_merge_1 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
[
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-confidence_sft_v1_full_merge_1 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null |
peft
|
LoRA trained in 4-bit with 8k context using [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf/) as the base model for 1 epoch.
Dataset used is [a modified](https://huggingface.co/datasets/mpasila/PIPPA-ShareGPT-formatted-named) version of [KaraKaraWitch/PIPPA-ShareGPT-formatted](https://huggingface.co/datasets/KaraKaraWitch/PIPPA-ShareGPT-formatted).
### Prompt format: ChatML
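For reference, ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` tokens, for example:
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```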
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "library_name": "peft", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "not-for-all-audiences"], "datasets": ["mpasila/PIPPA-ShareGPT-formatted-named", "KaraKaraWitch/PIPPA-ShareGPT-formatted"], "base_model": "unsloth/mistral-7b-v0.2-bnb-4bit"}
|
mpasila/PIPPA-Named-LoRA-7B
| null |
[
"peft",
"safetensors",
"text-generation-inference",
"transformers",
"unsloth",
"mistral",
"trl",
"not-for-all-audiences",
"en",
"dataset:mpasila/PIPPA-ShareGPT-formatted-named",
"dataset:KaraKaraWitch/PIPPA-ShareGPT-formatted",
"base_model:unsloth/mistral-7b-v0.2-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T11:39:23+00:00
|
[] |
[
"en"
] |
TAGS
#peft #safetensors #text-generation-inference #transformers #unsloth #mistral #trl #not-for-all-audiences #en #dataset-mpasila/PIPPA-ShareGPT-formatted-named #dataset-KaraKaraWitch/PIPPA-ShareGPT-formatted #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #region-us
|
LoRA trained in 4-bit with 8k context using alpindale/Mistral-7B-v0.2-hf as the base model for 1 epoch.
Dataset used is a modified version of KaraKaraWitch/PIPPA-ShareGPT-formatted.
### Prompt format: ChatML
# Uploaded model
- Developed by: mpasila
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"### Prompt format: ChatML",
"# Uploaded model\n\n- Developed by: mpasila\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#peft #safetensors #text-generation-inference #transformers #unsloth #mistral #trl #not-for-all-audiences #en #dataset-mpasila/PIPPA-ShareGPT-formatted-named #dataset-KaraKaraWitch/PIPPA-ShareGPT-formatted #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #region-us \n",
"### Prompt format: ChatML",
"# Uploaded model\n\n- Developed by: mpasila\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|
This is a merge of [mpasila/PIPPA-Named-LoRA-7B](https://huggingface.co/mpasila/PIPPA-Named-LoRA-7B/).
LoRA trained in 4-bit with 8k context using [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf/) as the base model for 1 epoch.
Dataset used is [a modified](https://huggingface.co/datasets/mpasila/PIPPA-ShareGPT-formatted-named) version of [KaraKaraWitch/PIPPA-ShareGPT-formatted](https://huggingface.co/datasets/KaraKaraWitch/PIPPA-ShareGPT-formatted).
### Prompt format: ChatML
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
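A minimal, untested sketch of chatting with the merged model via transformers, assuming the repo's tokenizer ships a ChatML chat template:
```python
# Untested sketch: load the merged model and generate one ChatML-formatted reply.
# Assumes the tokenizer provides a chat template for apply_chat_template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/PIPPA-Named-7B"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```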
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft", "not-for-all-audiences"], "datasets": ["mpasila/PIPPA-ShareGPT-formatted-named", "KaraKaraWitch/PIPPA-ShareGPT-formatted"], "base_model": "unsloth/mistral-7b-v0.2-bnb-4bit"}
|
mpasila/PIPPA-Named-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"not-for-all-audiences",
"conversational",
"en",
"dataset:mpasila/PIPPA-ShareGPT-formatted-named",
"dataset:KaraKaraWitch/PIPPA-ShareGPT-formatted",
"base_model:unsloth/mistral-7b-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T11:41:36+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #not-for-all-audiences #conversational #en #dataset-mpasila/PIPPA-ShareGPT-formatted-named #dataset-KaraKaraWitch/PIPPA-ShareGPT-formatted #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
This is a merge of mpasila/PIPPA-Named-LoRA-7B.
LoRA trained in 4-bit with 8k context using alpindale/Mistral-7B-v0.2-hf as the base model for 1 epoch.
Dataset used is a modified version of KaraKaraWitch/PIPPA-ShareGPT-formatted.
### Prompt format: ChatML
# Uploaded model
- Developed by: mpasila
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"### Prompt format: ChatML",
"# Uploaded model\n\n- Developed by: mpasila\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #not-for-all-audiences #conversational #en #dataset-mpasila/PIPPA-ShareGPT-formatted-named #dataset-KaraKaraWitch/PIPPA-ShareGPT-formatted #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Prompt format: ChatML",
"# Uploaded model\n\n- Developed by: mpasila\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|
# CodeQwen1.5-7B-Chat-AWQ
## Introduction
CodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of code data.
* Strong code generation capabilities and competitive performance across a series of benchmarks;
* Support for long-context understanding and generation with a context length of 64K tokens;
* Support for 92 coding languages;
* Excellent performance in text-to-SQL, bug fixing, etc.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/codeqwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Model Details
CodeQwen1.5 is based on Qwen1.5, a language model series including decoder language models of different model sizes. It is trained on 3 trillion tokens of code data and uses grouped-query attention (GQA) for efficient inference.
## Requirements
The code for Qwen1.5 has been merged into the latest Hugging Face transformers. We advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'.
```
Additionally, you need to install [`AutoAWQ`](https://github.com/casper-hansen/AutoAWQ) for AWQ support.
## Quickstart
The following code snippet shows how to load the tokenizer and model with `apply_chat_template` and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/CodeQwen1.5-7B-Chat-AWQ",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/CodeQwen1.5-7B-Chat-AWQ")
prompt = "Write a quicksort algorithm in python."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Tips
* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
## Citation
If you find our work helpful, feel free to cite us.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
|
{"language": ["en"], "license": "other", "tags": ["chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat-AWQ/blob/main/LICENSE", "pipeline_tag": "text-generation"}
|
Qwen/CodeQwen1.5-7B-Chat-AWQ
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-15T11:42:58+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #qwen2 #text-generation #chat #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# CodeQwen1.5-7B-Chat-AWQ
## Introduction
CodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of data of codes.
* Strong code generation capabilities and competitve performance across a series of benchmarks;
* Supporting long context understanding and generation with the context length of 64K tokens;
* Supporting 92 coding languages
* Excellent performance in text-to-SQL, bug fix, etc.
For more details, please refer to our blog post and GitHub repo.
## Model Details
CodeQwen1.5 is based on Qwen1.5, a language model series including decoder language models of different model sizes. It is trained on 3 trillion tokens of data of codes, and it includes group query attention (GQA) for efficient inference.
## Requirements
The code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install 'transformers>=4.37.0', or you might encounter the following error:
Additionally, you need to install 'AutoAWQ' for the AWQ support.
## Quickstart
Here provides a code snippet with 'apply_chat_template' to show you how to load the tokenizer and model and how to generate contents.
## Tips
* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in 'generation_config.json'.
If you find our work helpful, feel free to give us a cite.
|
[
"# CodeQwen1.5-7B-Chat-AWQ",
"## Introduction\n\nCodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of data of codes. \n\n* Strong code generation capabilities and competitve performance across a series of benchmarks;\n* Supporting long context understanding and generation with the context length of 64K tokens;\n* Supporting 92 coding languages\n* Excellent performance in text-to-SQL, bug fix, etc.\n\n\nFor more details, please refer to our blog post and GitHub repo.",
"## Model Details\nCodeQwen1.5 is based on Qwen1.5, a language model series including decoder language models of different model sizes. It is trained on 3 trillion tokens of data of codes, and it includes group query attention (GQA) for efficient inference.",
"## Requirements\nThe code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install 'transformers>=4.37.0', or you might encounter the following error:\n\nAdditionally, you need to install 'AutoAWQ' for the AWQ support.",
"## Quickstart\n\nHere provides a code snippet with 'apply_chat_template' to show you how to load the tokenizer and model and how to generate contents.",
"## Tips\n\n* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in 'generation_config.json'.\n\n\nIf you find our work helpful, feel free to give us a cite."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #chat #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# CodeQwen1.5-7B-Chat-AWQ",
"## Introduction\n\nCodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of data of codes. \n\n* Strong code generation capabilities and competitve performance across a series of benchmarks;\n* Supporting long context understanding and generation with the context length of 64K tokens;\n* Supporting 92 coding languages\n* Excellent performance in text-to-SQL, bug fix, etc.\n\n\nFor more details, please refer to our blog post and GitHub repo.",
"## Model Details\nCodeQwen1.5 is based on Qwen1.5, a language model series including decoder language models of different model sizes. It is trained on 3 trillion tokens of data of codes, and it includes group query attention (GQA) for efficient inference.",
"## Requirements\nThe code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install 'transformers>=4.37.0', or you might encounter the following error:\n\nAdditionally, you need to install 'AutoAWQ' for the AWQ support.",
"## Quickstart\n\nHere provides a code snippet with 'apply_chat_template' to show you how to load the tokenizer and model and how to generate contents.",
"## Tips\n\n* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in 'generation_config.json'.\n\n\nIf you find our work helpful, feel free to give us a cite."
] |
text-generation
|
transformers
|
<div align="center">
<img src="https://huggingface.co/openbmb/Eurus-7b-sft/resolve/main/figures/Eurus-logo.png" width="200px">
**Eurus: A suite of open-source LLMs optimized for reasoning**
<p align="center">
<a href="#introduction"> Introduction</a> •
<a href="#evaluation">Evaluation</a>
</p>
</div>
# Links
- 📜 [Paper](https://arxiv.org/abs/2404.02078)
- 🤗 [Eurus Collection](https://huggingface.co/collections/openbmb/eurus-660bc40bec5376b3adc9d1c5)
- 🤗 UltraInteract
- [SFT](https://huggingface.co/datasets/openbmb/UltraInteract_sft)
- [Preference Learning](https://huggingface.co/datasets/openbmb/UltraInteract_pair)
- [GitHub Repo](https://github.com/OpenBMB/Eurus)
# Introduction
Eurux-8x22B-NCA is SFT and [NCA](https://arxiv.org/abs/2402.05369) fine-tuned from [Mixtral-8x22B](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) on all multi-turn trajectory pairs in [UltraInteract](https://huggingface.co/openbmb/UltraInteract) and all pairs in [UltraFeedback](https://huggingface.co/openbmb/UltraFeedback).
It achieves superb reasoning performance as well as excellent chat & instruction-following capabilities.
## Evaluation
We conducted overall coding, math, reasoning, knowledge, instruction-following and chat benchmarking. Results are shown below, with the best scores in open-source models **bolded**:
| Models/Benchmarks | Coding | | | Math | | | Reasoning | Knowledge | Ins-Following | Chat |
|-------------------|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-------------:|:---------:|
| | HumanEval | MBPP | LeetCode | GSMPLUS | MATH | TheoremQA | BBH (CoT) | MMLU | IFEval | MT-Bench |
| GPT-3.5-Turbo | 76.8 | 82.5 | 23.3 | 61.2 | 37.8 | 35.6 | 70.1 | 70.0 | 56.6 | 7.94 |
| GPT-4 | 85.4 | 83.5 | 41.8 | 85.6 | 69.7 | 52.4 | 86.7 | 86.4 | 79.7 | 8.96 |
| Mixtral-8x7B-Ins | 50.6 | 50.1 | 5.6 | 49.6 | 25.9 | 20.4 | 73.5 | 70.3 | 48.8 | 8.30 |
| DS-LM-67B-Chat | 70.7 | 65.7 | 20.0 | 65.0 | 41.0 | 17.9 | 78.9 | 72.3 | 52.7 | 8.35 |
| QWen-1.5-72B | 71.3 | 56.9 | 15.6 | 65.4 | 43.4 | 18.5 | 78.0 | 72.9 | 53.4 | **8.61** |
| Eurus-70b-NCA | **79.3** | **71.9** | 33.3 | 62.8 | 41.7 | 32.6 | 80.0 | 59.4 | 49.2 | 7.54 |
| Eurux-8x22b-KTO | 71.3 | 68.9 | 29.4 | **68.3** | 48.4 | 35.3 | **83.6** | **75.9** | **67.1** | 8.58 |
| Eurux-8x22b-NCA | 75.0 | 69.7 | **35.0** | 68.1 | **49.0** | **35.5** | 83.5 | 75.6 | **67.1** | 8.46 |
## Usage
```python
# pip install 'transformers>=4.39.3'
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="openbmb/Eurux-8x22b-nca",
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{"role": "user", "content": "What does Eurus mean?"},
]
outputs = pipe(
messages,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
)
print(outputs[0]["generated_text"][-1]["content"])
```
We apply tailored prompts for coding and math, consistent with UltraInteract data formats:
**Coding**
```
[INST] Write Python code to solve the task:
{Instruction} [/INST]
```
**Math-CoT**
```
[INST] Solve the following math problem step-by-step.
Simplify your answer as much as possible. Present your final answer as \\boxed{Your Answer}.
{Instruction} [/INST]
```
**Math-PoT**
```
[INST] Tool available:
[1] Python interpreter
When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment.
Solve the following math problem step-by-step.
Simplify your answer as much as possible.
{Instruction} [/INST]
```
## Citation
```
@misc{yuan2024advancing,
title={Advancing LLM Reasoning Generalists with Preference Trees},
author={Lifan Yuan and Ganqu Cui and Hanbin Wang and Ning Ding and Xingyao Wang and Jia Deng and Boji Shan and Huimin Chen and Ruobing Xie and Yankai Lin and Zhenghao Liu and Bowen Zhou and Hao Peng and Zhiyuan Liu and Maosong Sun},
year={2024},
eprint={2404.02078},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
|
{"license": "apache-2.0", "tags": ["reasoning", "preference_learning", "nca"], "datasets": ["openbmb/UltraInteract_sft", "openbmb/UltraInteract_pair", "openbmb/UltraFeedback"], "pipeline_tag": "text-generation"}
|
openbmb/Eurux-8x22b-nca
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"reasoning",
"preference_learning",
"nca",
"conversational",
"dataset:openbmb/UltraInteract_sft",
"dataset:openbmb/UltraInteract_pair",
"dataset:openbmb/UltraFeedback",
"arxiv:2404.02078",
"arxiv:2402.05369",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T11:43:06+00:00
|
[
"2404.02078",
"2402.05369"
] |
[] |
TAGS
#transformers #safetensors #mixtral #text-generation #reasoning #preference_learning #nca #conversational #dataset-openbmb/UltraInteract_sft #dataset-openbmb/UltraInteract_pair #dataset-openbmb/UltraFeedback #arxiv-2404.02078 #arxiv-2402.05369 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<img src="URL width="200px">
Eurus: A suite of open-source LLMs optimized for reasoning
[Introduction](#introduction) •
[Evaluation](#evaluation)
Links
=====
* Paper
* Eurus Collection
* UltraInteract
* SFT
* Preference Learning
* GitHub Repo
Introduction
============
Eurux-8x22B-NCA is SFT and NCA fine-tuned from Mixtral-8x22B on all multi-turn trajectory pairs in UltraInteract and all pairs in UltraFeedback.
It achieves superb reasoning performance as well as exellent chat & instruction-following capabilities.
Evaluation
----------
We conducted overall coding, math, reasoning, knowledge, instruction-following and chat benchmarking. Results are shown below, with the best scores in open-source models bolded:
Usage
-----
We apply tailored prompts for coding and math, consistent with UltraInteract data formats:
Coding
Math-CoT
Math-PoT
|
[] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #reasoning #preference_learning #nca #conversational #dataset-openbmb/UltraInteract_sft #dataset-openbmb/UltraInteract_pair #dataset-openbmb/UltraFeedback #arxiv-2404.02078 #arxiv-2402.05369 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null |
adapter-transformers
|
# Adapter `jgrc3/RobertaDAPT_adapters_unipelt` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_split_25M_reviews_20_percent_condensed](https://huggingface.co/datasets/BigTMiami/amazon_split_25M_reviews_20_percent_condensed/) dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("jgrc3/RobertaDAPT_adapters_unipelt", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_split_25M_reviews_20_percent_condensed"]}
|
jgrc3/RobertaDAPT_adapters_unipelt
| null |
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_split_25M_reviews_20_percent_condensed",
"region:us"
] | null |
2024-04-15T11:43:30+00:00
|
[] |
[] |
TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us
|
# Adapter 'jgrc3/RobertaDAPT_adapters_unipelt' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'jgrc3/RobertaDAPT_adapters_unipelt' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us \n",
"# Adapter 'jgrc3/RobertaDAPT_adapters_unipelt' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
text-generation
|
transformers
|
<div align="center">
<img src="https://huggingface.co/openbmb/Eurus-7b-sft/resolve/main/figures/Eurus-logo.png" width="200px">
**Eurus: A suite of open-source LLMs optimized for reasoning**
<p align="center">
<a href="#introduction"> Introduction</a> •
<a href="#evaluation">Evaluation</a>
</p>
</div>
# Links
- 📜 [Paper](https://arxiv.org/abs/2404.02078)
- 🤗 [Eurus Collection](https://huggingface.co/collections/openbmb/eurus-660bc40bec5376b3adc9d1c5)
- 🤗 UltraInteract
- [SFT](https://huggingface.co/datasets/openbmb/UltraInteract_sft)
- [Preference Learning](https://huggingface.co/datasets/openbmb/UltraInteract_pair)
- [GitHub Repo](https://github.com/OpenBMB/Eurus)
# Introduction
Eurux-8x22B-KTO is SFT and [KTO](https://arxiv.org/abs/2402.01306) fine-tuned from [Mixtral-8x22B](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) on all multi-turn trajectory pairs in [UltraInteract](https://huggingface.co/openbmb/UltraInteract) and all pairs in [UltraFeedback](https://huggingface.co/openbmb/UltraFeedback).
It achieves superb reasoning performance as well as excellent chat & instruction-following capabilities.
## Evaluation
We conducted overall coding, math, reasoning, knowledge, instruction-following and chat benchmarking. Results are shown below, with the best scores in open-source models **bolded**:
| Models/Benchmarks | Coding | | | Math | | | Reasoning | Knowledge | Ins-Following | Chat |
|-------------------|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-------------:|:---------:|
| | HumanEval | MBPP | LeetCode | GSMPLUS | MATH | TheoremQA | BBH (CoT) | MMLU | IFEval | MT-Bench |
| GPT-3.5-Turbo | 76.8 | 82.5 | 23.3 | 61.2 | 37.8 | 35.6 | 70.1 | 70.0 | 56.6 | 7.94 |
| GPT-4 | 85.4 | 83.5 | 41.8 | 85.6 | 69.7 | 52.4 | 86.7 | 86.4 | 79.7 | 8.96 |
| Mixtral-8x7B-Ins | 50.6 | 50.1 | 5.6 | 49.6 | 25.9 | 20.4 | 73.5 | 70.3 | 48.8 | 8.30 |
| DS-LM-67B-Chat | 70.7 | 65.7 | 20.0 | 65.0 | 41.0 | 17.9 | 78.9 | 72.3 | 52.7 | 8.35 |
| QWen-1.5-72B | 71.3 | 56.9 | 15.6 | 65.4 | 43.4 | 18.5 | 78.0 | 72.9 | 53.4 | **8.61** |
| Eurus-70b-NCA | **79.3** | **71.9** | 33.3 | 62.8 | 41.7 | 32.6 | 80.0 | 59.4 | 49.2 | 7.54 |
| Eurux-8x22b-KTO | 71.3 | 68.9 | 29.4 | **68.3** | 48.4 | 35.3 | **83.6** | **75.9** | **67.1** | 8.58 |
| Eurux-8x22b-NCA | 75.0 | 69.7 | **35.0** | 68.1 | **49.0** | **35.5** | 83.5 | 75.6 | **67.1** | 8.46 |
## Usage
```python
# pip install 'transformers>=4.39.3'
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="openbmb/Eurux-8x22b-kto",
device_map="auto",
torch_dtype=torch.bfloat16,
)
messages = [
{"role": "user", "content": "What does Eurus mean?"},
]
outputs = pipe(
messages,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.95,
)
print(outputs[0]["generated_text"][-1]["content"])
```
We apply tailored prompts for coding and math, consistent with UltraInteract data formats:
**Coding**
```
[INST] Write Python code to solve the task:
{Instruction} [/INST]
```
**Math-CoT**
```
[INST] Solve the following math problem step-by-step.
Simplify your answer as much as possible. Present your final answer as \\boxed{Your Answer}.
{Instruction} [/INST]
```
**Math-PoT**
```
[INST] Tool available:
[1] Python interpreter
When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment.
Solve the following math problem step-by-step.
Simplify your answer as much as possible.
{Instruction} [/INST]
```
## Citation
```
@misc{yuan2024advancing,
title={Advancing LLM Reasoning Generalists with Preference Trees},
author={Lifan Yuan and Ganqu Cui and Hanbin Wang and Ning Ding and Xingyao Wang and Jia Deng and Boji Shan and Huimin Chen and Ruobing Xie and Yankai Lin and Zhenghao Liu and Bowen Zhou and Hao Peng and Zhiyuan Liu and Maosong Sun},
year={2024},
eprint={2404.02078},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
|
{"license": "apache-2.0", "tags": ["reasoning", "preference_learning", "nca"], "datasets": ["openbmb/UltraInteract_sft", "openbmb/UltraInteract_pair", "openbmb/UltraFeedback"], "pipeline_tag": "text-generation"}
|
openbmb/Eurux-8x22b-kto
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"reasoning",
"preference_learning",
"nca",
"dataset:openbmb/UltraInteract_sft",
"dataset:openbmb/UltraInteract_pair",
"dataset:openbmb/UltraFeedback",
"arxiv:2404.02078",
"arxiv:2402.01306",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T11:43:34+00:00
|
[
"2404.02078",
"2402.01306"
] |
[] |
TAGS
#transformers #safetensors #mixtral #text-generation #reasoning #preference_learning #nca #dataset-openbmb/UltraInteract_sft #dataset-openbmb/UltraInteract_pair #dataset-openbmb/UltraFeedback #arxiv-2404.02078 #arxiv-2402.01306 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<img src="URL width="200px">
Eurus: A suite of open-source LLMs optimized for reasoning
[Introduction](#introduction) •
[Evaluation](#evaluation)
Links
=====
* Paper
* Eurus Collection
* UltraInteract
* SFT
* Preference Learning
* GitHub Repo
Introduction
============
Eurux-8x22B-KTO is SFT and KTO fine-tuned from Mixtral-8x22B on all multi-turn trajectory pairs in UltraInteract and all pairs in UltraFeedback.
It achieves superb reasoning performance as well as exellent chat & instruction-following capabilities.
Evaluation
----------
We conducted overall coding, math, reasoning, knowledge, instruction-following and chat benchmarking. Results are shown below, with the best scores in open-source models bolded:
Usage
-----
We apply tailored prompts for coding and math, consistent with UltraInteract data formats:
Coding
Math-CoT
Math-PoT
|
[] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #reasoning #preference_learning #nca #dataset-openbmb/UltraInteract_sft #dataset-openbmb/UltraInteract_pair #dataset-openbmb/UltraFeedback #arxiv-2404.02078 #arxiv-2402.01306 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralv1_lora_r16_2e4_e3
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistralv1_lora_r16_2e4_e3", "results": []}]}
|
fangzhaoz/mistralv1_lora_r16_2e4_e3
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T11:43:56+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
|
# mistralv1_lora_r16_2e4_e3
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# mistralv1_lora_r16_2e4_e3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n",
"# mistralv1_lora_r16_2e4_e3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
fangzhaoz/mistralv1_lora_r16_2e4_e3_merged
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T11:44:09+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Resi/donut-docvqa-sagemaker-test
| null |
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T11:44:28+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_trained-mental-clm-model
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.6597 | 1.0 | 2261 | 3.5505 |
| 3.554 | 2.0 | 4522 | 3.4949 |
| 3.4943 | 3.0 | 6783 | 3.4667 |
| 3.4392 | 4.0 | 9044 | 3.4530 |
| 3.4115 | 5.0 | 11305 | 3.4443 |
| 3.3814 | 6.0 | 13566 | 3.4367 |
| 3.3565 | 7.0 | 15827 | 3.4331 |
| 3.3478 | 8.0 | 18088 | 3.4296 |
| 3.3239 | 9.0 | 20349 | 3.4299 |
| 3.3266 | 10.0 | 22610 | 3.4286 |
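Because this is a causal language model, the reported validation losses can be read as perplexities via `exp(loss)`; a small sketch using the numbers from the table above:

```python
import math

# Validation losses per epoch, copied from the table above.
val_losses = [3.5505, 3.4949, 3.4667, 3.4530, 3.4443,
              3.4367, 3.4331, 3.4296, 3.4299, 3.4286]

# Perplexity of a causal LM is exp(cross-entropy loss).
for epoch, loss in enumerate(val_losses, start=1):
    print(f"epoch {epoch:2d}: loss={loss:.4f}  perplexity={math.exp(loss):.2f}")

# The final epoch's loss of 3.4286 corresponds to a perplexity of roughly 30.8.
```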
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilgpt2", "model-index": [{"name": "my_trained-mental-clm-model", "results": []}]}
|
justinandhika/my_trained-mental-clm-model
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T11:46:43+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-distilbert/distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
my\_trained-mental-clm-model
============================
This model is a fine-tuned version of distilbert/distilgpt2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.4286
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-distilbert/distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
HeydarS/MiniCPM_witQA_peft_v58
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T11:47:31+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
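These settings map directly onto a `BitsAndBytesConfig`; the sketch below shows how the same 4-bit NF4 double-quantization setup could be recreated when loading the base model. The base model id is an assumption (the repository name suggests a LLaMA-7B base), since the card does not state it.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization settings listed above: 4-bit NF4 weights,
# double quantization, and a bfloat16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base model, not stated in the card
    quantization_config=bnb_config,
    device_map="auto",
)
```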
### Framework versions
- PEFT 0.4.0
|
{"library_name": "peft"}
|
aidiary/finetune-llama-7b-qlora-gozaru
| null |
[
"peft",
"region:us"
] | null |
2024-04-15T11:52:49+00:00
|
[] |
[] |
TAGS
#peft #region-us
|
## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
[
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] |
[
"TAGS\n#peft #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.4.0"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Alpha_0.3
This model is a fine-tuned version of [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) on the example dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 12.0
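The effective batch size here comes from gradient accumulation rather than the per-device batch size (1 × 8 = 8); a minimal sketch of the equivalent `TrainingArguments`, filling in only the fields listed above:

```python
from transformers import TrainingArguments

# total_train_batch_size = per-device batch size * gradient accumulation steps = 1 * 8 = 8
args = TrainingArguments(
    output_dir="Alpha_0.3",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    lr_scheduler_type="cosine",
    num_train_epochs=12.0,
    seed=42,
)
```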
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "other", "library_name": "peft", "tags": ["llama-factory", "lora", "generated_from_trainer"], "datasets": ["example_dataset"], "base_model": "mlabonne/AlphaMonarch-7B", "model-index": [{"name": "Alpha_0.3", "results": []}]}
|
TanvirMungekar/Alpha_0.3
| null |
[
"peft",
"tensorboard",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"dataset:example_dataset",
"base_model:mlabonne/AlphaMonarch-7B",
"license:other",
"region:us"
] | null |
2024-04-15T11:54:31+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #llama-factory #lora #generated_from_trainer #dataset-example_dataset #base_model-mlabonne/AlphaMonarch-7B #license-other #region-us
|
# Alpha_0.3
This model is a fine-tuned version of mlabonne/AlphaMonarch-7B on the example dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 12.0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# Alpha_0.3\n\nThis model is a fine-tuned version of mlabonne/AlphaMonarch-7B on the example dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 12.0",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.0.1+cu117\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #llama-factory #lora #generated_from_trainer #dataset-example_dataset #base_model-mlabonne/AlphaMonarch-7B #license-other #region-us \n",
"# Alpha_0.3\n\nThis model is a fine-tuned version of mlabonne/AlphaMonarch-7B on the example dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 12.0",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.0.1+cu117\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-climate-stance-detection
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1210
- Accuracy: 0.6244
- F1: 0.6184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0217 | 1.0 | 26 | 0.9528 | 0.5561 | 0.4944 |
| 0.8418 | 2.0 | 52 | 0.8457 | 0.6293 | 0.6200 |
| 0.6785 | 3.0 | 78 | 0.8370 | 0.6488 | 0.6361 |
| 0.5214 | 4.0 | 104 | 0.8629 | 0.6390 | 0.6308 |
| 0.4224 | 5.0 | 130 | 0.9791 | 0.6146 | 0.6066 |
| 0.3313 | 6.0 | 156 | 1.0028 | 0.6537 | 0.6507 |
| 0.2757 | 7.0 | 182 | 1.0350 | 0.6293 | 0.6198 |
| 0.2265 | 8.0 | 208 | 1.0909 | 0.6146 | 0.6086 |
| 0.1804 | 9.0 | 234 | 1.1283 | 0.6244 | 0.6184 |
| 0.1646 | 10.0 | 260 | 1.1210 | 0.6244 | 0.6184 |
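For inference, the checkpoint can be used through the standard text-classification pipeline; the sketch below reuses the widget examples from this card's metadata. The exact label names returned depend on the label mapping saved with the model, which is not listed here.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="rldekkers/bert-base-uncased-finetuned-climate-stance-detection",
)

# Sentences taken from the widget examples in this card's metadata.
examples = [
    "De-carbonization is not happening fast enough.",              # positive example
    "The sea is not rising.",                                      # negative example
    "Each of us emits about two pounds of carbon dioxide a day.",  # neutral example
]
for text in examples:
    print(text, "->", classifier(text))
```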
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cpu
- Datasets 2.12.0
- Tokenizers 0.13.2
|
{"language": ["en"], "tags": ["generated_from_trainer", "stance detection", "bert"], "metrics": ["accuracy", "f1"], "base_model": "bert-base-uncased", "widget": [{"text": "De-carbonization is not happening fast enough.", "example_title": "Positive example"}, {"text": "The sea is not rising.", "example_title": "Negative example"}, {"text": "Each of us emits about two pounds of carbon dioxide a day.", "example_title": "Neutral example"}], "model-index": [{"name": "bert-base-uncased-finetuned-climate-stance-detection", "results": []}]}
|
rldekkers/bert-base-uncased-finetuned-climate-stance-detection
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"stance detection",
"en",
"base_model:bert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T11:58:06+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #generated_from_trainer #stance detection #en #base_model-bert-base-uncased #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-climate-stance-detection
====================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1210
* Accuracy: 0.6244
* F1: 0.6184
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.32.1
* Pytorch 2.1.0+cpu
* Datasets 2.12.0
* Tokenizers 0.13.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.1.0+cpu\n* Datasets 2.12.0\n* Tokenizers 0.13.2"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #stance detection #en #base_model-bert-base-uncased #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.1.0+cpu\n* Datasets 2.12.0\n* Tokenizers 0.13.2"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
HenryCai1129/LlamaAdapter-llama2-happy-200
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:01:19+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# salt_language_ID
This model is a fine-tuned version of [google/t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0226
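
The card does not include starter code; the snippet below is a minimal inference sketch, assuming the checkpoint works with the standard seq2seq auto classes and that it returns a language label as generated text (the expected input/output format is not documented here).

```python
# Minimal inference sketch. Assumptions: the checkpoint loads through the standard
# seq2seq auto classes, and the model emits a language label as plain text; the exact
# input/output format is not documented in this card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "yigagilbert/salt_language_ID"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Webale nnyo okutuyamba", return_tensors="pt")  # placeholder input sentence
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```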
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a training-arguments sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
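
For reference, a sketch of how the hyperparameters above could be expressed as `Seq2SeqTrainingArguments`; only the values listed in this card are reproduced, and everything else is an assumption left at library defaults.

```python
# Sketch only: reproduces the listed hyperparameters; all other settings are assumptions/defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="salt_language_ID",
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,   # 32 x 2 = effective train batch size of 64, as listed above
    warmup_steps=500,                # lr_scheduler_warmup_steps
    lr_scheduler_type="linear",
    num_train_epochs=2,
    seed=42,
)
```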
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 290.8962 | 0.01 | 10 | 305.3102 |
| 294.9642 | 0.01 | 20 | 296.0765 |
| 281.0769 | 0.02 | 30 | 279.4232 |
| 270.1917 | 0.03 | 40 | 254.2555 |
| 243.3464 | 0.04 | 50 | 224.8921 |
| 214.4873 | 0.04 | 60 | 196.3753 |
| 187.9743 | 0.05 | 70 | 135.2601 |
| 164.024 | 0.06 | 80 | 105.8822 |
| 143.6121 | 0.06 | 90 | 90.4786 |
| 126.019 | 0.07 | 100 | 75.9475 |
| 111.2358 | 0.08 | 110 | 64.2843 |
| 96.1458 | 0.09 | 120 | 51.4159 |
| 83.5523 | 0.09 | 130 | 37.5541 |
| 69.0164 | 0.1 | 140 | 24.2653 |
| 55.3427 | 0.11 | 150 | 19.5405 |
| 38.9215 | 0.11 | 160 | 12.6305 |
| 28.0225 | 0.12 | 170 | 10.0512 |
| 21.42 | 0.13 | 180 | 6.2506 |
| 15.1783 | 0.14 | 190 | 3.1231 |
| 11.7336 | 0.14 | 200 | 1.5384 |
| 9.3141 | 0.15 | 210 | 1.0360 |
| 6.9583 | 0.16 | 220 | 0.5647 |
| 5.7743 | 0.16 | 230 | 0.5395 |
| 4.7486 | 0.17 | 240 | 0.5611 |
| 3.7387 | 0.18 | 250 | 0.4738 |
| 3.3398 | 0.19 | 260 | 0.4057 |
| 3.1383 | 0.19 | 270 | 0.7111 |
| 2.5906 | 0.2 | 280 | 0.3963 |
| 2.4711 | 0.21 | 290 | 0.6025 |
| 2.0874 | 0.21 | 300 | 0.4192 |
| 2.2178 | 0.22 | 310 | 0.5130 |
| 1.9783 | 0.23 | 320 | 0.2481 |
| 1.9655 | 0.24 | 330 | 0.2947 |
| 1.677 | 0.24 | 340 | 0.1795 |
| 1.5847 | 0.25 | 350 | 0.4913 |
| 1.6727 | 0.26 | 360 | 0.3358 |
| 1.5304 | 0.26 | 370 | 0.4296 |
| 1.4964 | 0.27 | 380 | 0.1527 |
| 1.3643 | 0.28 | 390 | 0.4387 |
| 1.1374 | 0.29 | 400 | 0.1458 |
| 1.0719 | 0.29 | 410 | 0.1550 |
| 1.2705 | 0.3 | 420 | 0.3249 |
| 0.863 | 0.31 | 430 | 0.1285 |
| 0.9644 | 0.31 | 440 | 0.2107 |
| 0.9679 | 0.32 | 450 | 0.1729 |
| 0.9753 | 0.33 | 460 | 0.2159 |
| 0.7938 | 0.33 | 470 | 0.3218 |
| 0.739 | 0.34 | 480 | 0.1385 |
| 0.6355 | 0.35 | 490 | 0.4408 |
| 0.8578 | 0.36 | 500 | 0.1109 |
| 0.758 | 0.36 | 510 | 0.1401 |
| 0.5957 | 0.37 | 520 | 0.1470 |
| 0.5933 | 0.38 | 530 | 0.1215 |
| 0.6636 | 0.38 | 540 | 0.2513 |
| 0.6857 | 0.39 | 550 | 0.1407 |
| 0.5623 | 0.4 | 560 | 0.1325 |
| 0.5055 | 0.41 | 570 | 0.1624 |
| 0.5866 | 0.41 | 580 | 0.1763 |
| 0.5811 | 0.42 | 590 | 0.1567 |
| 0.5255 | 0.43 | 600 | 0.0916 |
| 0.4512 | 0.43 | 610 | 0.1853 |
| 0.4862 | 0.44 | 620 | 0.1168 |
| 0.4378 | 0.45 | 630 | 0.0929 |
| 0.4966 | 0.46 | 640 | 0.1254 |
| 0.5138 | 0.46 | 650 | 0.1285 |
| 0.4926 | 0.47 | 660 | 0.1072 |
| 0.4133 | 0.48 | 670 | 0.0727 |
| 0.3738 | 0.48 | 680 | 0.1100 |
| 0.4558 | 0.49 | 690 | 0.1214 |
| 0.3865 | 0.5 | 700 | 0.0773 |
| 0.4216 | 0.51 | 710 | 0.1595 |
| 0.3449 | 0.51 | 720 | 0.0754 |
| 0.4205 | 0.52 | 730 | 0.0982 |
| 0.3779 | 0.53 | 740 | 0.1105 |
| 0.3229 | 0.53 | 750 | 0.1698 |
| 0.3178 | 0.54 | 760 | 0.0753 |
| 0.3405 | 0.55 | 770 | 0.2353 |
| 0.3267 | 0.56 | 780 | 0.0656 |
| 0.2672 | 0.56 | 790 | 0.0955 |
| 0.4229 | 0.57 | 800 | 0.0635 |
| 0.3338 | 0.58 | 810 | 0.1630 |
| 0.337 | 0.58 | 820 | 0.0740 |
| 0.2945 | 0.59 | 830 | 0.1947 |
| 0.3374 | 0.6 | 840 | 0.1016 |
| 0.3101 | 0.61 | 850 | 0.0946 |
| 0.2595 | 0.61 | 860 | 0.0785 |
| 0.3179 | 0.62 | 870 | 0.0758 |
| 0.244 | 0.63 | 880 | 0.0606 |
| 0.3186 | 0.63 | 890 | 0.0601 |
| 0.2838 | 0.64 | 900 | 0.0847 |
| 0.2624 | 0.65 | 910 | 0.0638 |
| 0.2806 | 0.66 | 920 | 0.0915 |
| 0.2859 | 0.66 | 930 | 0.0630 |
| 0.213 | 0.67 | 940 | 0.0592 |
| 0.2174 | 0.68 | 950 | 0.0514 |
| 0.2555 | 0.68 | 960 | 0.0832 |
| 0.2361 | 0.69 | 970 | 0.0442 |
| 0.1854 | 0.7 | 980 | 0.1016 |
| 0.2414 | 0.71 | 990 | 0.0615 |
| 0.2522 | 0.71 | 1000 | 0.0420 |
| 0.2331 | 0.72 | 1010 | 0.0609 |
| 0.2191 | 0.73 | 1020 | 0.0605 |
| 0.1605 | 0.73 | 1030 | 0.0535 |
| 0.2002 | 0.74 | 1040 | 0.0607 |
| 0.2003 | 0.75 | 1050 | 0.0535 |
| 0.2306 | 0.76 | 1060 | 0.0597 |
| 0.2004 | 0.76 | 1070 | 0.0583 |
| 0.1524 | 0.77 | 1080 | 0.0653 |
| 0.2124 | 0.78 | 1090 | 0.0543 |
| 0.1635 | 0.78 | 1100 | 0.0490 |
| 0.2245 | 0.79 | 1110 | 0.0538 |
| 0.2144 | 0.8 | 1120 | 0.0411 |
| 0.2212 | 0.81 | 1130 | 0.0421 |
| 0.2369 | 0.81 | 1140 | 0.0373 |
| 0.1499 | 0.82 | 1150 | 0.1028 |
| 0.2434 | 0.83 | 1160 | 0.0515 |
| 0.214 | 0.83 | 1170 | 0.0388 |
| 0.1667 | 0.84 | 1180 | 0.0576 |
| 0.2044 | 0.85 | 1190 | 0.0360 |
| 0.1666 | 0.86 | 1200 | 0.0532 |
| 0.1679 | 0.86 | 1210 | 0.0389 |
| 0.2201 | 0.87 | 1220 | 0.0411 |
| 0.1384 | 0.88 | 1230 | 0.0653 |
| 0.2331 | 0.88 | 1240 | 0.0364 |
| 0.1344 | 0.89 | 1250 | 0.0432 |
| 0.1661 | 0.9 | 1260 | 0.0604 |
| 0.1689 | 0.91 | 1270 | 0.0426 |
| 0.1465 | 0.91 | 1280 | 0.0448 |
| 0.2009 | 0.92 | 1290 | 0.0389 |
| 0.1384 | 0.93 | 1300 | 0.0362 |
| 0.179 | 0.93 | 1310 | 0.0466 |
| 0.1728 | 0.94 | 1320 | 0.0373 |
| 0.139 | 0.95 | 1330 | 0.0469 |
| 0.1359 | 0.96 | 1340 | 0.0339 |
| 0.1666 | 0.96 | 1350 | 0.0390 |
| 0.0943 | 0.97 | 1360 | 0.0359 |
| 0.1155 | 0.98 | 1370 | 0.0499 |
| 0.1176 | 0.98 | 1380 | 0.0390 |
| 0.1034 | 0.99 | 1390 | 0.0603 |
| 0.1147 | 1.0 | 1400 | 0.0370 |
| 0.127 | 1.0 | 1410 | 0.0513 |
| 0.1474 | 1.01 | 1420 | 0.0341 |
| 0.1509 | 1.02 | 1430 | 0.0378 |
| 0.0809 | 1.03 | 1440 | 0.0521 |
| 0.1262 | 1.03 | 1450 | 0.0320 |
| 0.1305 | 1.04 | 1460 | 0.0484 |
| 0.1552 | 1.05 | 1470 | 0.0311 |
| 0.1147 | 1.05 | 1480 | 0.0341 |
| 0.1099 | 1.06 | 1490 | 0.0330 |
| 0.1196 | 1.07 | 1500 | 0.0332 |
| 0.0823 | 1.08 | 1510 | 0.0475 |
| 0.1426 | 1.08 | 1520 | 0.0377 |
| 0.1118 | 1.09 | 1530 | 0.0336 |
| 0.0665 | 1.1 | 1540 | 0.0328 |
| 0.1171 | 1.1 | 1550 | 0.0324 |
| 0.1166 | 1.11 | 1560 | 0.0447 |
| 0.1348 | 1.12 | 1570 | 0.0342 |
| 0.1327 | 1.13 | 1580 | 0.0413 |
| 0.1099 | 1.13 | 1590 | 0.0335 |
| 0.0801 | 1.14 | 1600 | 0.0370 |
| 0.13 | 1.15 | 1610 | 0.0389 |
| 0.1238 | 1.15 | 1620 | 0.0516 |
| 0.1092 | 1.16 | 1630 | 0.0311 |
| 0.1007 | 1.17 | 1640 | 0.0399 |
| 0.1142 | 1.18 | 1650 | 0.0383 |
| 0.0893 | 1.18 | 1660 | 0.0328 |
| 0.1115 | 1.19 | 1670 | 0.0536 |
| 0.0861 | 1.2 | 1680 | 0.0289 |
| 0.1141 | 1.2 | 1690 | 0.0334 |
| 0.1487 | 1.21 | 1700 | 0.0314 |
| 0.1214 | 1.22 | 1710 | 0.0371 |
| 0.0876 | 1.23 | 1720 | 0.0296 |
| 0.0927 | 1.23 | 1730 | 0.0292 |
| 0.0651 | 1.24 | 1740 | 0.0309 |
| 0.1355 | 1.25 | 1750 | 0.0372 |
| 0.0883 | 1.25 | 1760 | 0.0359 |
| 0.1067 | 1.26 | 1770 | 0.0305 |
| 0.1166 | 1.27 | 1780 | 0.0368 |
| 0.0603 | 1.28 | 1790 | 0.0306 |
| 0.073 | 1.28 | 1800 | 0.0292 |
| 0.1029 | 1.29 | 1810 | 0.0308 |
| 0.1019 | 1.3 | 1820 | 0.0279 |
| 0.0989 | 1.3 | 1830 | 0.0356 |
| 0.1132 | 1.31 | 1840 | 0.0506 |
| 0.0978 | 1.32 | 1850 | 0.0280 |
| 0.0743 | 1.33 | 1860 | 0.0305 |
| 0.0573 | 1.33 | 1870 | 0.0265 |
| 0.0861 | 1.34 | 1880 | 0.0303 |
| 0.0782 | 1.35 | 1890 | 0.0467 |
| 0.0931 | 1.35 | 1900 | 0.0286 |
| 0.0812 | 1.36 | 1910 | 0.0329 |
| 0.0993 | 1.37 | 1920 | 0.0440 |
| 0.1547 | 1.38 | 1930 | 0.0411 |
| 0.081 | 1.38 | 1940 | 0.0308 |
| 0.1014 | 1.39 | 1950 | 0.0289 |
| 0.0674 | 1.4 | 1960 | 0.0362 |
| 0.1119 | 1.4 | 1970 | 0.0412 |
| 0.0996 | 1.41 | 1980 | 0.0267 |
| 0.1239 | 1.42 | 1990 | 0.0272 |
| 0.0919 | 1.43 | 2000 | 0.0334 |
| 0.1352 | 1.43 | 2010 | 0.0276 |
| 0.068 | 1.44 | 2020 | 0.0283 |
| 0.094 | 1.45 | 2030 | 0.0282 |
| 0.0844 | 1.45 | 2040 | 0.0315 |
| 0.0486 | 1.46 | 2050 | 0.0247 |
| 0.0721 | 1.47 | 2060 | 0.0275 |
| 0.1169 | 1.48 | 2070 | 0.0331 |
| 0.1055 | 1.48 | 2080 | 0.0292 |
| 0.0665 | 1.49 | 2090 | 0.0241 |
| 0.078 | 1.5 | 2100 | 0.0245 |
| 0.0923 | 1.5 | 2110 | 0.0277 |
| 0.0983 | 1.51 | 2120 | 0.0292 |
| 0.0993 | 1.52 | 2130 | 0.0241 |
| 0.0381 | 1.53 | 2140 | 0.0316 |
| 0.0614 | 1.53 | 2150 | 0.0269 |
| 0.0616 | 1.54 | 2160 | 0.0239 |
| 0.0535 | 1.55 | 2170 | 0.0246 |
| 0.0645 | 1.55 | 2180 | 0.0286 |
| 0.0895 | 1.56 | 2190 | 0.0310 |
| 0.0963 | 1.57 | 2200 | 0.0266 |
| 0.087 | 1.58 | 2210 | 0.0253 |
| 0.0976 | 1.58 | 2220 | 0.0252 |
| 0.1 | 1.59 | 2230 | 0.0332 |
| 0.0679 | 1.6 | 2240 | 0.0301 |
| 0.0949 | 1.6 | 2250 | 0.0263 |
| 0.0508 | 1.61 | 2260 | 0.0232 |
| 0.0619 | 1.62 | 2270 | 0.0274 |
| 0.0649 | 1.63 | 2280 | 0.0239 |
| 0.0837 | 1.63 | 2290 | 0.0253 |
| 0.0903 | 1.64 | 2300 | 0.0262 |
| 0.0655 | 1.65 | 2310 | 0.0274 |
| 0.0782 | 1.65 | 2320 | 0.0340 |
| 0.0905 | 1.66 | 2330 | 0.0279 |
| 0.1116 | 1.67 | 2340 | 0.0278 |
| 0.06 | 1.67 | 2350 | 0.0256 |
| 0.0915 | 1.68 | 2360 | 0.0285 |
| 0.0826 | 1.69 | 2370 | 0.0259 |
| 0.0593 | 1.7 | 2380 | 0.0265 |
| 0.0551 | 1.7 | 2390 | 0.0247 |
| 0.0732 | 1.71 | 2400 | 0.0252 |
| 0.0936 | 1.72 | 2410 | 0.0271 |
| 0.0706 | 1.72 | 2420 | 0.0263 |
| 0.0544 | 1.73 | 2430 | 0.0250 |
| 0.0606 | 1.74 | 2440 | 0.0247 |
| 0.0707 | 1.75 | 2450 | 0.0256 |
| 0.0759 | 1.75 | 2460 | 0.0269 |
| 0.0688 | 1.76 | 2470 | 0.0260 |
| 0.0537 | 1.77 | 2480 | 0.0239 |
| 0.0979 | 1.77 | 2490 | 0.0246 |
| 0.0899 | 1.78 | 2500 | 0.0263 |
| 0.0834 | 1.79 | 2510 | 0.0274 |
| 0.0597 | 1.8 | 2520 | 0.0253 |
| 0.0807 | 1.8 | 2530 | 0.0250 |
| 0.0902 | 1.81 | 2540 | 0.0221 |
| 0.0849 | 1.82 | 2550 | 0.0223 |
| 0.0722 | 1.82 | 2560 | 0.0222 |
| 0.0647 | 1.83 | 2570 | 0.0211 |
| 0.0789 | 1.84 | 2580 | 0.0217 |
| 0.0839 | 1.85 | 2590 | 0.0248 |
| 0.0761 | 1.85 | 2600 | 0.0252 |
| 0.1191 | 1.86 | 2610 | 0.0267 |
| 0.093 | 1.87 | 2620 | 0.0254 |
| 0.0581 | 1.87 | 2630 | 0.0245 |
| 0.0776 | 1.88 | 2640 | 0.0246 |
| 0.0699 | 1.89 | 2650 | 0.0242 |
| 0.07 | 1.9 | 2660 | 0.0246 |
| 0.0523 | 1.9 | 2670 | 0.0238 |
| 0.0773 | 1.91 | 2680 | 0.0226 |
| 0.0781 | 1.92 | 2690 | 0.0221 |
| 0.0593 | 1.92 | 2700 | 0.0223 |
| 0.0955 | 1.93 | 2710 | 0.0234 |
| 0.0662 | 1.94 | 2720 | 0.0235 |
| 0.0704 | 1.95 | 2730 | 0.0229 |
| 0.0785 | 1.95 | 2740 | 0.0222 |
| 0.0778 | 1.96 | 2750 | 0.0219 |
| 0.0462 | 1.97 | 2760 | 0.0220 |
| 0.0596 | 1.97 | 2770 | 0.0222 |
| 0.0599 | 1.98 | 2780 | 0.0224 |
| 0.0489 | 1.99 | 2790 | 0.0226 |
| 0.0738 | 2.0 | 2800 | 0.0226 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/t5-efficient-tiny", "model-index": [{"name": "salt_language_ID", "results": []}]}
|
yigagilbert/salt_language_ID
| null |
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/t5-efficient-tiny",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T12:01:35+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mt5 #text2text-generation #generated_from_trainer #base_model-google/t5-efficient-tiny #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
salt\_language\_ID
==================
This model is a fine-tuned version of google/t5-efficient-tiny on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0226
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.1.2
* Datasets 2.1.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #mt5 #text2text-generation #generated_from_trainer #base_model-google/t5-efficient-tiny #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vsft-idefics2
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
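
No usage example is included; the snippet below is a minimal sketch, assuming the fine-tuned checkpoint at edbeeching/vsft-idefics2 loads through the standard vision-to-sequence auto classes. The prompt format and dtype choice are assumptions.

```python
# Minimal loading/inference sketch; prompt format and dtype choice are assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "edbeeching/vsft-idefics2"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(text="<image>Describe this image.", images=[image], return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```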
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "HuggingFaceM4/idefics2-8b", "model-index": [{"name": "vsft-idefics2", "results": []}]}
|
edbeeching/vsft-idefics2
| null |
[
"transformers",
"safetensors",
"idefics2",
"pretraining",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:04:59+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #idefics2 #pretraining #trl #sft #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #endpoints_compatible #region-us
|
# vsft-idefics2
This model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# vsft-idefics2\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #idefics2 #pretraining #trl #sft #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #endpoints_compatible #region-us \n",
"# vsft-idefics2\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
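
In the absence of provided code, a minimal sketch: it assumes this repository holds a PEFT (LoRA-style) adapter for the base model listed in the card metadata, mistralai/Mistral-7B-Instruct-v0.2, and that the standard Mistral-Instruct prompt format applies.

```python
# Sketch only: loads the adapter on top of its base model via PEFT's auto class.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "Charishma27/200_data_model"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

prompt = "[INST] Summarize what this adapter was trained to do. [/INST]"  # assumed chat format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```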
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
|
Charishma27/200_data_model
| null |
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null |
2024-04-15T12:05:09+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
[
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-to-image
|
diffusers
|
# DreamBooth - SidXXD/Attn_Maps-test
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
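
The card metadata lists the instance prompt "a photo of sks dog"; below is a minimal inference sketch, assuming the repository hosts a full Stable Diffusion pipeline as the tags indicate. Scheduler, step count and guidance scale are assumptions.

```python
# Minimal inference sketch; steps and guidance scale are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("SidXXD/Attn_Maps-test", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "sks" is the rare-token identifier that DreamBooth binds to the training subject.
image = pipe("a photo of sks dog in a bucket", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```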
|
{"license": "creativeml-openrail-m", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "dreambooth"], "base_model": "CompVis/stable-diffusion-v1-4", "instance_prompt": "a photo of sks dog", "inference": true}
|
SidXXD/Attn_Maps-test
| null |
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-15T12:06:46+00:00
|
[] |
[] |
TAGS
#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #dreambooth #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# DreamBooth - SidXXD/Attn_Maps-test
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using DreamBooth.
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
[
"# DreamBooth - SidXXD/Attn_Maps-test\n\nThis is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False."
] |
[
"TAGS\n#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #dreambooth #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# DreamBooth - SidXXD/Attn_Maps-test\n\nThis is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False."
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base_emotion_0415
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1231
- Accuracy: 0.9555
- F1: 0.9553
- Precision: 0.9564
- Recall: 0.9555
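
Usage code is not provided; a minimal sketch follows, assuming the checkpoint exposes a standard sequence-classification head and that its `id2label` config carries the emotion labels (the label set is not documented here).

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="LaoLaoFish/base_emotion_0415")
print(classifier("I can't believe how well this turned out!"))
# -> [{'label': ..., 'score': ...}]; the concrete emotion labels depend on the model's config
```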
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:----------:|:------:|
| 0.1195 | 1.0 | 569 | 0.1104 | 0.9551 | 0.9548 | 0.9566 | 0.9551 |
| 0.1016 | 2.0 | 1138 | 0.1108 | 0.9557 | 0.9554 | 0.9569 | 0.9557 |
| 0.0946 | 3.0 | 1707 | 0.1156 | 0.9575 | 0.9573 | 0.9588 | 0.9574 |
| 0.0898 | 4.0 | 2276 | 0.1175 | 0.9560 | 0.9558 | 0.9570 | 0.9560 |
| 0.0839 | 5.0 | 2845 | 0.1231 | 0.9555 | 0.9553 | 0.9564 | 0.9555 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "recall"], "base_model": "bert-base-uncased", "model-index": [{"name": "base_emotion_0415", "results": []}]}
|
LaoLaoFish/base_emotion_0415
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:07:53+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
base\_emotion\_0415
===================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1231
* Accuracy: 0.9555
* F1: 0.9553
* Precision: 0.9564
* Recall: 0.9555
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
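
No starter code is given; a minimal sketch, assuming the checkpoint (tagged `stablelm`, text-generation) loads through the standard causal-LM auto classes. Whether a chat template is bundled is not documented, so a plain prompt is used.

```python
# Minimal sketch; dtype and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Grayx/sad_pepe_21"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```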
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Grayx/sad_pepe_21
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:09:03+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
Number of experts present in the library: 263
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| sciq_Multiple_Choice | phi-2 | sordonia/flan-10k-flat/sciq_Multiple_Choice | lora |
| wiki_hop_original_choose_best_object_interrogative_1 | phi-2 | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_1 | lora |
| squad_v2_0_3_0_0 | phi-2 | sordonia/flan-10k-flat/squad_v2_0_3_0_0 | lora |
| wiki_qa_exercise | phi-2 | sordonia/flan-10k-flat/wiki_qa_exercise | lora |
| race_high_Taking_a_test | phi-2 | sordonia/flan-10k-flat/race_high_Taking_a_test | lora |
| adversarial_qa_dbert_generate_question | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbert_generate_question | lora |
| quoref_Found_Context_Online | phi-2 | sordonia/flan-10k-flat/quoref_Found_Context_Online | lora |
| web_questions_get_the_answer | phi-2 | sordonia/flan-10k-flat/web_questions_get_the_answer | lora |
| duorc_SelfRC_generate_question_by_answer | phi-2 | sordonia/flan-10k-flat/duorc_SelfRC_generate_question_by_answer | lora |
| quarel_testing_students | phi-2 | sordonia/flan-10k-flat/quarel_testing_students | lora |
| qasc_qa_with_separated_facts_1 | phi-2 | sordonia/flan-10k-flat/qasc_qa_with_separated_facts_1 | lora |
| wiki_qa_Is_This_True_ | phi-2 | sordonia/flan-10k-flat/wiki_qa_Is_This_True_ | lora |
| race_high_Read_the_article_and_answer_the_question_no_option_ | phi-2 | sordonia/flan-10k-flat/race_high_Read_the_article_and_answer_the_question_no_option_ | lora |
| cot_gsm8k_ii | phi-2 | sordonia/flan-10k-flat/cot_gsm8k_ii | lora |
| gem_wiki_lingua_english_en_1_1_0 | phi-2 | sordonia/flan-10k-flat/gem_wiki_lingua_english_en_1_1_0 | lora |
| unified_qa_science_inst | phi-2 | sordonia/flan-10k-flat/unified_qa_science_inst | lora |
| quartz_use_info_from_paragraph_question | phi-2 | sordonia/flan-10k-flat/quartz_use_info_from_paragraph_question | lora |
| wiki_hop_original_generate_object | phi-2 | sordonia/flan-10k-flat/wiki_hop_original_generate_object | lora |
| quoref_What_Is_The_Answer | phi-2 | sordonia/flan-10k-flat/quoref_What_Is_The_Answer | lora |
| adversarial_qa_droberta_generate_question | phi-2 | sordonia/flan-10k-flat/adversarial_qa_droberta_generate_question | lora |
| wiki_bio_comprehension | phi-2 | sordonia/flan-10k-flat/wiki_bio_comprehension | lora |
| adversarial_qa_dbidaf_question_context_answer | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbidaf_question_context_answer | lora |
| wiki_bio_what_content | phi-2 | sordonia/flan-10k-flat/wiki_bio_what_content | lora |
| web_questions_whats_the_answer | phi-2 | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| wiqa_what_is_the_missing_first_step | phi-2 | sordonia/flan-10k-flat/wiqa_what_is_the_missing_first_step | lora |
| adversarial_qa_droberta_question_context_answer | phi-2 | sordonia/flan-10k-flat/adversarial_qa_droberta_question_context_answer | lora |
| ropes_plain_bottom_hint | phi-2 | sordonia/flan-10k-flat/ropes_plain_bottom_hint | lora |
| kilt_tasks_hotpotqa_combining_facts | phi-2 | sordonia/flan-10k-flat/kilt_tasks_hotpotqa_combining_facts | lora |
| cos_e_v1_11_aligned_with_common_sense | phi-2 | sordonia/flan-10k-flat/cos_e_v1_11_aligned_with_common_sense | lora |
| gem_web_nlg_en_1_1_0 | phi-2 | sordonia/flan-10k-flat/gem_web_nlg_en_1_1_0 | lora |
| web_questions_potential_correct_answer | phi-2 | sordonia/flan-10k-flat/web_questions_potential_correct_answer | lora |
| wiki_qa_found_on_google | phi-2 | sordonia/flan-10k-flat/wiki_qa_found_on_google | lora |
| duorc_ParaphraseRC_extract_answer | phi-2 | sordonia/flan-10k-flat/duorc_ParaphraseRC_extract_answer | lora |
| wmt16_translate_de_en_1_0_0 | phi-2 | sordonia/flan-10k-flat/wmt16_translate_de_en_1_0_0 | lora |
| quail_no_prompt_id | phi-2 | sordonia/flan-10k-flat/quail_no_prompt_id | lora |
| quoref_Guess_Title_For_Context | phi-2 | sordonia/flan-10k-flat/quoref_Guess_Title_For_Context | lora |
| duorc_SelfRC_decide_worth_it | phi-2 | sordonia/flan-10k-flat/duorc_SelfRC_decide_worth_it | lora |
| ropes_prompt_mix | phi-2 | sordonia/flan-10k-flat/ropes_prompt_mix | lora |
| adversarial_qa_droberta_tell_what_it_is | phi-2 | sordonia/flan-10k-flat/adversarial_qa_droberta_tell_what_it_is | lora |
| quail_context_question_answer_description_id | phi-2 | sordonia/flan-10k-flat/quail_context_question_answer_description_id | lora |
| gem_common_gen_1_1_0 | phi-2 | sordonia/flan-10k-flat/gem_common_gen_1_1_0 | lora |
| duorc_ParaphraseRC_answer_question | phi-2 | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| super_glue_cb_1_0_2 | phi-2 | sordonia/flan-10k-flat/super_glue_cb_1_0_2 | lora |
| cnn_dailymail_3_4_0 | phi-2 | sordonia/flan-10k-flat/cnn_dailymail_3_4_0 | lora |
| race_high_Write_a_multi_choice_question_options_given_ | phi-2 | sordonia/flan-10k-flat/race_high_Write_a_multi_choice_question_options_given_ | lora |
| winogrande_1_1_0 | phi-2 | sordonia/flan-10k-flat/winogrande_1_1_0 | lora |
| duorc_SelfRC_extract_answer | phi-2 | sordonia/flan-10k-flat/duorc_SelfRC_extract_answer | lora |
| trec_1_0_0 | phi-2 | sordonia/flan-10k-flat/trec_1_0_0 | lora |
| yelp_polarity_reviews_0_2_0 | phi-2 | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| race_high_Select_the_best_answer | phi-2 | sordonia/flan-10k-flat/race_high_Select_the_best_answer | lora |
| para_crawl_enes | phi-2 | sordonia/flan-10k-flat/para_crawl_enes | lora |
| qasc_is_correct_1 | phi-2 | sordonia/flan-10k-flat/qasc_is_correct_1 | lora |
| app_reviews_generate_review | phi-2 | sordonia/flan-10k-flat/app_reviews_generate_review | lora |
| ropes_read_background_situation | phi-2 | sordonia/flan-10k-flat/ropes_read_background_situation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | phi-2 | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| stream_aqua | phi-2 | sordonia/flan-10k-flat/stream_aqua | lora |
| drop_2_0_0 | phi-2 | sordonia/flan-10k-flat/drop_2_0_0 | lora |
| wiki_hop_original_choose_best_object_affirmative_1 | phi-2 | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_affirmative_1 | lora |
| adversarial_qa_dbidaf_answer_the_following_q | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| social_i_qa_Generate_answer | phi-2 | sordonia/flan-10k-flat/social_i_qa_Generate_answer | lora |
| stream_aqua_ii | phi-2 | sordonia/flan-10k-flat/stream_aqua_ii | lora |
| glue_sst2_2_0_0 | phi-2 | sordonia/flan-10k-flat/glue_sst2_2_0_0 | lora |
| cot_esnli | phi-2 | sordonia/flan-10k-flat/cot_esnli | lora |
| race_high_Select_the_best_answer_no_instructions_ | phi-2 | sordonia/flan-10k-flat/race_high_Select_the_best_answer_no_instructions_ | lora |
| duorc_SelfRC_build_story_around_qa | phi-2 | sordonia/flan-10k-flat/duorc_SelfRC_build_story_around_qa | lora |
| cot_esnli_ii | phi-2 | sordonia/flan-10k-flat/cot_esnli_ii | lora |
| quail_no_prompt_text | phi-2 | sordonia/flan-10k-flat/quail_no_prompt_text | lora |
| ropes_given_background_situation | phi-2 | sordonia/flan-10k-flat/ropes_given_background_situation | lora |
| quarel_logic_test | phi-2 | sordonia/flan-10k-flat/quarel_logic_test | lora |
| adversarial_qa_dbidaf_based_on | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbidaf_based_on | lora |
| super_glue_copa_1_0_2 | phi-2 | sordonia/flan-10k-flat/super_glue_copa_1_0_2 | lora |
| cos_e_v1_11_i_think | phi-2 | sordonia/flan-10k-flat/cos_e_v1_11_i_think | lora |
| quail_context_question_description_answer_text | phi-2 | sordonia/flan-10k-flat/quail_context_question_description_answer_text | lora |
| math_dataset_algebra__linear_1d_1_0_0 | phi-2 | sordonia/flan-10k-flat/math_dataset_algebra__linear_1d_1_0_0 | lora |
| cosmos_qa_1_0_0 | phi-2 | sordonia/flan-10k-flat/cosmos_qa_1_0_0 | lora |
| wiqa_effect_with_label_answer | phi-2 | sordonia/flan-10k-flat/wiqa_effect_with_label_answer | lora |
| app_reviews_convert_to_star_rating | phi-2 | sordonia/flan-10k-flat/app_reviews_convert_to_star_rating | lora |
| qasc_qa_with_separated_facts_2 | phi-2 | sordonia/flan-10k-flat/qasc_qa_with_separated_facts_2 | lora |
| race_middle_Select_the_best_answer | phi-2 | sordonia/flan-10k-flat/race_middle_Select_the_best_answer | lora |
| quartz_having_read_above_passage | phi-2 | sordonia/flan-10k-flat/quartz_having_read_above_passage | lora |
| glue_qqp_2_0_0 | phi-2 | sordonia/flan-10k-flat/glue_qqp_2_0_0 | lora |
| cos_e_v1_11_question_description_option_id | phi-2 | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_id | lora |
| cos_e_v1_11_question_option_description_text | phi-2 | sordonia/flan-10k-flat/cos_e_v1_11_question_option_description_text | lora |
| imdb_reviews_plain_text_1_0_0 | phi-2 | sordonia/flan-10k-flat/imdb_reviews_plain_text_1_0_0 | lora |
| wiki_hop_original_choose_best_object_affirmative_2 | phi-2 | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_affirmative_2 | lora |
| natural_questions_open_1_0_0 | phi-2 | sordonia/flan-10k-flat/natural_questions_open_1_0_0 | lora |
| wiqa_effect_with_string_answer | phi-2 | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| cos_e_v1_11_rationale | phi-2 | sordonia/flan-10k-flat/cos_e_v1_11_rationale | lora |
| race_middle_Write_a_multi_choice_question_options_given_ | phi-2 | sordonia/flan-10k-flat/race_middle_Write_a_multi_choice_question_options_given_ | lora |
| wiki_bio_guess_person | phi-2 | sordonia/flan-10k-flat/wiki_bio_guess_person | lora |
| hellaswag_1_1_0 | phi-2 | sordonia/flan-10k-flat/hellaswag_1_1_0 | lora |
| wiqa_does_the_supposed_perturbation_have_an_effect | phi-2 | sordonia/flan-10k-flat/wiqa_does_the_supposed_perturbation_have_an_effect | lora |
| trivia_qa_rc_1_1_0 | phi-2 | sordonia/flan-10k-flat/trivia_qa_rc_1_1_0 | lora |
| lambada_1_0_0 | phi-2 | sordonia/flan-10k-flat/lambada_1_0_0 | lora |
| quoref_Read_And_Extract_ | phi-2 | sordonia/flan-10k-flat/quoref_Read_And_Extract_ | lora |
| quail_context_description_question_answer_id | phi-2 | sordonia/flan-10k-flat/quail_context_description_question_answer_id | lora |
| quail_context_description_question_answer_text | phi-2 | sordonia/flan-10k-flat/quail_context_description_question_answer_text | lora |
| duorc_SelfRC_question_answering | phi-2 | sordonia/flan-10k-flat/duorc_SelfRC_question_answering | lora |
| cot_sensemaking_ii | phi-2 | sordonia/flan-10k-flat/cot_sensemaking_ii | lora |
| fix_punct | phi-2 | sordonia/flan-10k-flat/fix_punct | lora |
| squad_v1_1_3_0_0 | phi-2 | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| coqa_1_0_0 | phi-2 | sordonia/flan-10k-flat/coqa_1_0_0 | lora |
| glue_qnli_2_0_0 | phi-2 | sordonia/flan-10k-flat/glue_qnli_2_0_0 | lora |
| wiki_qa_Jeopardy_style | phi-2 | sordonia/flan-10k-flat/wiki_qa_Jeopardy_style | lora |
| qasc_qa_with_separated_facts_5 | phi-2 | sordonia/flan-10k-flat/qasc_qa_with_separated_facts_5 | lora |
| glue_mnli_2_0_0 | phi-2 | sordonia/flan-10k-flat/glue_mnli_2_0_0 | lora |
| wiki_bio_key_content | phi-2 | sordonia/flan-10k-flat/wiki_bio_key_content | lora |
| dream_generate_first_utterance | phi-2 | sordonia/flan-10k-flat/dream_generate_first_utterance | lora |
| quartz_read_passage_below_choose | phi-2 | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| web_questions_question_answer | phi-2 | sordonia/flan-10k-flat/web_questions_question_answer | lora |
| glue_stsb_2_0_0 | phi-2 | sordonia/flan-10k-flat/glue_stsb_2_0_0 | lora |
| wmt16_translate_tr_en_1_0_0 | phi-2 | sordonia/flan-10k-flat/wmt16_translate_tr_en_1_0_0 | lora |
| cot_qasc | phi-2 | sordonia/flan-10k-flat/cot_qasc | lora |
| duorc_ParaphraseRC_title_generation | phi-2 | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| quail_description_context_question_answer_id | phi-2 | sordonia/flan-10k-flat/quail_description_context_question_answer_id | lora |
| wiki_qa_Topic_Prediction_Question_Only | phi-2 | sordonia/flan-10k-flat/wiki_qa_Topic_Prediction_Question_Only | lora |
| quoref_Find_Answer | phi-2 | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| social_i_qa_I_was_wondering | phi-2 | sordonia/flan-10k-flat/social_i_qa_I_was_wondering | lora |
| wiki_hop_original_choose_best_object_affirmative_3 | phi-2 | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_affirmative_3 | lora |
| duorc_ParaphraseRC_build_story_around_qa | phi-2 | sordonia/flan-10k-flat/duorc_ParaphraseRC_build_story_around_qa | lora |
| qasc_qa_with_separated_facts_3 | phi-2 | sordonia/flan-10k-flat/qasc_qa_with_separated_facts_3 | lora |
| race_middle_Is_this_the_right_answer | phi-2 | sordonia/flan-10k-flat/race_middle_Is_this_the_right_answer | lora |
| paws_wiki_1_1_0 | phi-2 | sordonia/flan-10k-flat/paws_wiki_1_1_0 | lora |
| app_reviews_categorize_rating_using_review | phi-2 | sordonia/flan-10k-flat/app_reviews_categorize_rating_using_review | lora |
| anli_r3_0_1_0 | phi-2 | sordonia/flan-10k-flat/anli_r3_0_1_0 | lora |
| app_reviews_convert_to_rating | phi-2 | sordonia/flan-10k-flat/app_reviews_convert_to_rating | lora |
| wiqa_what_is_the_final_step_of_the_following_process | phi-2 | sordonia/flan-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
| adversarial_qa_droberta_answer_the_following_q | phi-2 | sordonia/flan-10k-flat/adversarial_qa_droberta_answer_the_following_q | lora |
| wiki_qa_Decide_good_answer | phi-2 | sordonia/flan-10k-flat/wiki_qa_Decide_good_answer | lora |
| adversarial_qa_dbert_answer_the_following_q | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbert_answer_the_following_q | lora |
| gem_dart_1_1_0 | phi-2 | sordonia/flan-10k-flat/gem_dart_1_1_0 | lora |
| adversarial_qa_dbert_tell_what_it_is | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbert_tell_what_it_is | lora |
| quarel_choose_between | phi-2 | sordonia/flan-10k-flat/quarel_choose_between | lora |
| duorc_ParaphraseRC_generate_question_by_answer | phi-2 | sordonia/flan-10k-flat/duorc_ParaphraseRC_generate_question_by_answer | lora |
| wiki_hop_original_generate_subject | phi-2 | sordonia/flan-10k-flat/wiki_hop_original_generate_subject | lora |
| dream_baseline | phi-2 | sordonia/flan-10k-flat/dream_baseline | lora |
| cos_e_v1_11_question_description_option_text | phi-2 | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| aeslc_1_0_0 | phi-2 | sordonia/flan-10k-flat/aeslc_1_0_0 | lora |
| anli_r2_0_1_0 | phi-2 | sordonia/flan-10k-flat/anli_r2_0_1_0 | lora |
| dbpedia_14_given_list_what_category_does_the_paragraph_belong_to | phi-2 | sordonia/flan-10k-flat/dbpedia_14_given_list_what_category_does_the_paragraph_belong_to | lora |
| quail_context_question_description_answer_id | phi-2 | sordonia/flan-10k-flat/quail_context_question_description_answer_id | lora |
| race_middle_Select_the_best_answer_no_instructions_ | phi-2 | sordonia/flan-10k-flat/race_middle_Select_the_best_answer_no_instructions_ | lora |
| wmt16_translate_ro_en_1_0_0 | phi-2 | sordonia/flan-10k-flat/wmt16_translate_ro_en_1_0_0 | lora |
| race_high_Is_this_the_right_answer | phi-2 | sordonia/flan-10k-flat/race_high_Is_this_the_right_answer | lora |
| quail_description_context_question_text | phi-2 | sordonia/flan-10k-flat/quail_description_context_question_text | lora |
| sciq_Direct_Question_Closed_Book_ | phi-2 | sordonia/flan-10k-flat/sciq_Direct_Question_Closed_Book_ | lora |
| openbookqa_0_1_0 | phi-2 | sordonia/flan-10k-flat/openbookqa_0_1_0 | lora |
| duorc_SelfRC_title_generation | phi-2 | sordonia/flan-10k-flat/duorc_SelfRC_title_generation | lora |
| cot_gsm8k | phi-2 | sordonia/flan-10k-flat/cot_gsm8k | lora |
| quartz_answer_question_below | phi-2 | sordonia/flan-10k-flat/quartz_answer_question_below | lora |
| snli_1_1_0 | phi-2 | sordonia/flan-10k-flat/snli_1_1_0 | lora |
| sciq_Multiple_Choice_Closed_Book_ | phi-2 | sordonia/flan-10k-flat/sciq_Multiple_Choice_Closed_Book_ | lora |
| cot_strategyqa | phi-2 | sordonia/flan-10k-flat/cot_strategyqa | lora |
| qasc_qa_with_separated_facts_4 | phi-2 | sordonia/flan-10k-flat/qasc_qa_with_separated_facts_4 | lora |
| ropes_prompt_bottom_no_hint | phi-2 | sordonia/flan-10k-flat/ropes_prompt_bottom_no_hint | lora |
| duorc_SelfRC_generate_question | phi-2 | sordonia/flan-10k-flat/duorc_SelfRC_generate_question | lora |
| quartz_given_the_fact_answer_the_q | phi-2 | sordonia/flan-10k-flat/quartz_given_the_fact_answer_the_q | lora |
| anli_r1_0_1_0 | phi-2 | sordonia/flan-10k-flat/anli_r1_0_1_0 | lora |
| wiki_qa_Topic_Prediction_Question_and_Answer_Pair | phi-2 | sordonia/flan-10k-flat/wiki_qa_Topic_Prediction_Question_and_Answer_Pair | lora |
| wiki_qa_Direct_Answer_to_Question | phi-2 | sordonia/flan-10k-flat/wiki_qa_Direct_Answer_to_Question | lora |
| qasc_is_correct_2 | phi-2 | sordonia/flan-10k-flat/qasc_is_correct_2 | lora |
| wiki_hop_original_generate_subject_and_object | phi-2 | sordonia/flan-10k-flat/wiki_hop_original_generate_subject_and_object | lora |
| ai2_arc_ARC_Challenge_1_0_0 | phi-2 | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| race_middle_Select_the_best_answer_generate_span_ | phi-2 | sordonia/flan-10k-flat/race_middle_Select_the_best_answer_generate_span_ | lora |
| quail_context_question_answer_description_text | phi-2 | sordonia/flan-10k-flat/quail_context_question_answer_description_text | lora |
| quail_context_question_description_text | phi-2 | sordonia/flan-10k-flat/quail_context_question_description_text | lora |
| wiki_hop_original_choose_best_object_interrogative_2 | phi-2 | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| duorc_SelfRC_movie_director | phi-2 | sordonia/flan-10k-flat/duorc_SelfRC_movie_director | lora |
| quoref_Given_Context_Answer_Question | phi-2 | sordonia/flan-10k-flat/quoref_Given_Context_Answer_Question | lora |
| wiki_hop_original_explain_relation | phi-2 | sordonia/flan-10k-flat/wiki_hop_original_explain_relation | lora |
| super_glue_record_1_0_2 | phi-2 | sordonia/flan-10k-flat/super_glue_record_1_0_2 | lora |
| adversarial_qa_dbidaf_tell_what_it_is | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbidaf_tell_what_it_is | lora |
| cot_ecqa_ii | phi-2 | sordonia/flan-10k-flat/cot_ecqa_ii | lora |
| ropes_background_new_situation_answer | phi-2 | sordonia/flan-10k-flat/ropes_background_new_situation_answer | lora |
| web_questions_short_general_knowledge_q | phi-2 | sordonia/flan-10k-flat/web_questions_short_general_knowledge_q | lora |
| wiqa_what_might_be_the_first_step_of_the_process | phi-2 | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
| duorc_SelfRC_answer_question | phi-2 | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| ag_news_subset_1_0_0 | phi-2 | sordonia/flan-10k-flat/ag_news_subset_1_0_0 | lora |
| race_middle_Write_a_multi_choice_question_for_the_following_article | phi-2 | sordonia/flan-10k-flat/race_middle_Write_a_multi_choice_question_for_the_following_article | lora |
| wmt14_translate_fr_en_1_0_0 | phi-2 | sordonia/flan-10k-flat/wmt14_translate_fr_en_1_0_0 | lora |
| sciq_Direct_Question | phi-2 | sordonia/flan-10k-flat/sciq_Direct_Question | lora |
| super_glue_multirc_1_0_2 | phi-2 | sordonia/flan-10k-flat/super_glue_multirc_1_0_2 | lora |
| dbpedia_14_given_a_choice_of_categories_ | phi-2 | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| super_glue_wic_1_0_2 | phi-2 | sordonia/flan-10k-flat/super_glue_wic_1_0_2 | lora |
| social_i_qa_Show_choices_and_generate_answer | phi-2 | sordonia/flan-10k-flat/social_i_qa_Show_choices_and_generate_answer | lora |
| wiqa_what_might_be_the_last_step_of_the_process | phi-2 | sordonia/flan-10k-flat/wiqa_what_might_be_the_last_step_of_the_process | lora |
| quoref_Answer_Question_Given_Context | phi-2 | sordonia/flan-10k-flat/quoref_Answer_Question_Given_Context | lora |
| quoref_Context_Contains_Answer | phi-2 | sordonia/flan-10k-flat/quoref_Context_Contains_Answer | lora |
| cos_e_v1_11_description_question_option_text | phi-2 | sordonia/flan-10k-flat/cos_e_v1_11_description_question_option_text | lora |
| adversarial_qa_dbert_based_on | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbert_based_on | lora |
| multi_news_1_0_0 | phi-2 | sordonia/flan-10k-flat/multi_news_1_0_0 | lora |
| cos_e_v1_11_generate_explanation_given_text | phi-2 | sordonia/flan-10k-flat/cos_e_v1_11_generate_explanation_given_text | lora |
| true_case | phi-2 | sordonia/flan-10k-flat/true_case | lora |
| duorc_ParaphraseRC_movie_director | phi-2 | sordonia/flan-10k-flat/duorc_ParaphraseRC_movie_director | lora |
| quartz_answer_question_based_on | phi-2 | sordonia/flan-10k-flat/quartz_answer_question_based_on | lora |
| bool_q_1_0_0 | phi-2 | sordonia/flan-10k-flat/bool_q_1_0_0 | lora |
| quoref_Guess_Answer | phi-2 | sordonia/flan-10k-flat/quoref_Guess_Answer | lora |
| quarel_do_not_use | phi-2 | sordonia/flan-10k-flat/quarel_do_not_use | lora |
| cos_e_v1_11_explain_why_human | phi-2 | sordonia/flan-10k-flat/cos_e_v1_11_explain_why_human | lora |
| wiki_qa_Generate_Question_from_Topic | phi-2 | sordonia/flan-10k-flat/wiki_qa_Generate_Question_from_Topic | lora |
| kilt_tasks_hotpotqa_straighforward_qa | phi-2 | sordonia/flan-10k-flat/kilt_tasks_hotpotqa_straighforward_qa | lora |
| adversarial_qa_dbidaf_generate_question | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| dbpedia_14_pick_one_category_for_the_following_text | phi-2 | sordonia/flan-10k-flat/dbpedia_14_pick_one_category_for_the_following_text | lora |
| kilt_tasks_hotpotqa_final_exam | phi-2 | sordonia/flan-10k-flat/kilt_tasks_hotpotqa_final_exam | lora |
| quoref_Answer_Friend_Question | phi-2 | sordonia/flan-10k-flat/quoref_Answer_Friend_Question | lora |
| race_high_Write_a_multi_choice_question_for_the_following_article | phi-2 | sordonia/flan-10k-flat/race_high_Write_a_multi_choice_question_for_the_following_article | lora |
| ropes_prompt_beginning | phi-2 | sordonia/flan-10k-flat/ropes_prompt_beginning | lora |
| adversarial_qa_dbert_question_context_answer | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbert_question_context_answer | lora |
| cot_creak | phi-2 | sordonia/flan-10k-flat/cot_creak | lora |
| gem_e2e_nlg_1_1_0 | phi-2 | sordonia/flan-10k-flat/gem_e2e_nlg_1_1_0 | lora |
| cos_e_v1_11_description_question_option_id | phi-2 | sordonia/flan-10k-flat/cos_e_v1_11_description_question_option_id | lora |
| social_i_qa_Generate_the_question_from_the_answer | phi-2 | sordonia/flan-10k-flat/social_i_qa_Generate_the_question_from_the_answer | lora |
| quarel_heres_a_story | phi-2 | sordonia/flan-10k-flat/quarel_heres_a_story | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not | phi-2 | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| ropes_background_situation_middle | phi-2 | sordonia/flan-10k-flat/ropes_background_situation_middle | lora |
| sciq_Multiple_Choice_Question_First | phi-2 | sordonia/flan-10k-flat/sciq_Multiple_Choice_Question_First | lora |
| cot_strategyqa_ii | phi-2 | sordonia/flan-10k-flat/cot_strategyqa_ii | lora |
| huggingface_xsum | phi-2 | sordonia/flan-10k-flat/huggingface_xsum | lora |
| kilt_tasks_hotpotqa_complex_question | phi-2 | sordonia/flan-10k-flat/kilt_tasks_hotpotqa_complex_question | lora |
| wmt16_translate_fi_en_1_0_0 | phi-2 | sordonia/flan-10k-flat/wmt16_translate_fi_en_1_0_0 | lora |
| ai2_arc_ARC_Easy_1_0_0 | phi-2 | sordonia/flan-10k-flat/ai2_arc_ARC_Easy_1_0_0 | lora |
| stream_qed | phi-2 | sordonia/flan-10k-flat/stream_qed | lora |
| definite_pronoun_resolution_1_1_0 | phi-2 | sordonia/flan-10k-flat/definite_pronoun_resolution_1_1_0 | lora |
| super_glue_rte_1_0_2 | phi-2 | sordonia/flan-10k-flat/super_glue_rte_1_0_2 | lora |
| ropes_new_situation_background_answer | phi-2 | sordonia/flan-10k-flat/ropes_new_situation_background_answer | lora |
| dream_read_the_following_conversation_and_answer_the_question | phi-2 | sordonia/flan-10k-flat/dream_read_the_following_conversation_and_answer_the_question | lora |
| cot_sensemaking | phi-2 | sordonia/flan-10k-flat/cot_sensemaking | lora |
| wiki_qa_Topic_Prediction_Answer_Only | phi-2 | sordonia/flan-10k-flat/wiki_qa_Topic_Prediction_Answer_Only | lora |
| duorc_ParaphraseRC_generate_question | phi-2 | sordonia/flan-10k-flat/duorc_ParaphraseRC_generate_question | lora |
| dream_generate_last_utterance | phi-2 | sordonia/flan-10k-flat/dream_generate_last_utterance | lora |
| race_middle_Taking_a_test | phi-2 | sordonia/flan-10k-flat/race_middle_Taking_a_test | lora |
| piqa_1_0_0 | phi-2 | sordonia/flan-10k-flat/piqa_1_0_0 | lora |
| cot_ecqa | phi-2 | sordonia/flan-10k-flat/cot_ecqa | lora |
| glue_mrpc_2_0_0 | phi-2 | sordonia/flan-10k-flat/glue_mrpc_2_0_0 | lora |
| race_middle_Read_the_article_and_answer_the_question_no_option_ | phi-2 | sordonia/flan-10k-flat/race_middle_Read_the_article_and_answer_the_question_no_option_ | lora |
| ropes_plain_background_situation | phi-2 | sordonia/flan-10k-flat/ropes_plain_background_situation | lora |
| quail_description_context_question_answer_text | phi-2 | sordonia/flan-10k-flat/quail_description_context_question_answer_text | lora |
| qasc_qa_with_combined_facts_1 | phi-2 | sordonia/flan-10k-flat/qasc_qa_with_combined_facts_1 | lora |
| cot_creak_ii | phi-2 | sordonia/flan-10k-flat/cot_creak_ii | lora |
| duorc_ParaphraseRC_decide_worth_it | phi-2 | sordonia/flan-10k-flat/duorc_ParaphraseRC_decide_worth_it | lora |
| quoref_Answer_Test | phi-2 | sordonia/flan-10k-flat/quoref_Answer_Test | lora |
| wiki_bio_who | phi-2 | sordonia/flan-10k-flat/wiki_bio_who | lora |
| kilt_tasks_hotpotqa_formulate | phi-2 | sordonia/flan-10k-flat/kilt_tasks_hotpotqa_formulate | lora |
| glue_wnli_2_0_0 | phi-2 | sordonia/flan-10k-flat/glue_wnli_2_0_0 | lora |
| gigaword_1_2_0 | phi-2 | sordonia/flan-10k-flat/gigaword_1_2_0 | lora |
| quail_context_description_question_text | phi-2 | sordonia/flan-10k-flat/quail_context_description_question_text | lora |
| dream_answer_to_dialogue | phi-2 | sordonia/flan-10k-flat/dream_answer_to_dialogue | lora |
| cos_e_v1_11_question_option_description_id | phi-2 | sordonia/flan-10k-flat/cos_e_v1_11_question_option_description_id | lora |
| duorc_ParaphraseRC_question_answering | phi-2 | sordonia/flan-10k-flat/duorc_ParaphraseRC_question_answering | lora |
| wiki_qa_automatic_system | phi-2 | sordonia/flan-10k-flat/wiki_qa_automatic_system | lora |
| adversarial_qa_droberta_based_on | phi-2 | sordonia/flan-10k-flat/adversarial_qa_droberta_based_on | lora |
| super_glue_wsc_fixed_1_0_2 | phi-2 | sordonia/flan-10k-flat/super_glue_wsc_fixed_1_0_2 | lora |
| word_segment | phi-2 | sordonia/flan-10k-flat/word_segment | lora |
| quac_1_0_0 | phi-2 | sordonia/flan-10k-flat/quac_1_0_0 | lora |
| quartz_paragraph_question_plain_concat | phi-2 | sordonia/flan-10k-flat/quartz_paragraph_question_plain_concat | lora |
| wiqa_which_of_the_following_is_the_supposed_perturbation | phi-2 | sordonia/flan-10k-flat/wiqa_which_of_the_following_is_the_supposed_perturbation | lora |
| quartz_use_info_from_question_paragraph | phi-2 | sordonia/flan-10k-flat/quartz_use_info_from_question_paragraph | lora |
| ropes_plain_no_background | phi-2 | sordonia/flan-10k-flat/ropes_plain_no_background | lora |
| race_high_Select_the_best_answer_generate_span_ | phi-2 | sordonia/flan-10k-flat/race_high_Select_the_best_answer_generate_span_ | lora |
| glue_cola_2_0_0 | phi-2 | sordonia/flan-10k-flat/glue_cola_2_0_0 | lora |
| social_i_qa_Show_choices_and_generate_index | phi-2 | sordonia/flan-10k-flat/social_i_qa_Show_choices_and_generate_index | lora |
| ropes_prompt_bottom_hint_beginning | phi-2 | sordonia/flan-10k-flat/ropes_prompt_bottom_hint_beginning | lora |
| stream_qed_ii | phi-2 | sordonia/flan-10k-flat/stream_qed_ii | lora |
Last updated on: 2024-04-19 14:36:04+00:00
|
{}
|
zhan1993/private_library_phi2_epoch_0
| null |
[
"region:us"
] | null |
2024-04-15T12:10:10+00:00
|
[] |
[] |
TAGS
#region-us
|
Number of experts present in the library: 263
|
[] |
[
"TAGS\n#region-us \n"
] |
reinforcement-learning
|
ml-agents
|
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: jayjay19630/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
|
jayjay19630/ppo-Huggy
| null |
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null |
2024-04-15T12:12:10+00:00
|
[] |
[] |
TAGS
#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
|
# ppo Agent playing Huggy
This is a trained model of a ppo agent playing Huggy
using the Unity ML-Agents Library.
## Usage (with ML-Agents)
The Documentation: URL
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
browser: URL
- A *longer tutorial* to understand how ML-Agents works:
URL
### Resume the training
### Watch your Agent play
You can watch your agent playing directly in your browser
1. If the environment is part of ML-Agents official environments, go to URL
2. Step 1: Find your model_id: jayjay19630/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
|
[
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: jayjay19630/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
[
"TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n",
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: jayjay19630/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Pretraining_Test_v3
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
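
The card does not include the original training script. Purely as an assumed sketch, the hyperparameters above could be wired into a masked-language-modeling `Trainer` run roughly as follows; the corpus below is a placeholder, since the actual training data is not documented:

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModelForMaskedLM.from_pretrained("microsoft/deberta-base")

# Placeholder corpus -- the real training data is unknown ("unknown dataset" above).
corpus = Dataset.from_dict({"text": ["placeholder sentence one.", "placeholder sentence two."]})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="Pretraining_Test_v3",
    learning_rate=5e-5,               # learning_rate: 5e-05
    per_device_train_batch_size=4,    # train_batch_size: 4
    per_device_eval_batch_size=8,     # eval_batch_size: 8
    num_train_epochs=3,               # num_epochs: 3
    seed=42,
    lr_scheduler_type="linear",       # Adam defaults give betas=(0.9, 0.999), epsilon=1e-08
)

data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=args,
    data_collator=data_collator,
    train_dataset=tokenized,
    eval_dataset=tokenized,           # placeholder; the real eval split is unknown
)
trainer.train()
```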
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/deberta-base", "model-index": [{"name": "Pretraining_Test_v3", "results": []}]}
|
JJ-Tae/Pretraining_Test_v3
| null |
[
"transformers",
"tensorboard",
"safetensors",
"deberta",
"fill-mask",
"generated_from_trainer",
"base_model:microsoft/deberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:12:37+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #deberta #fill-mask #generated_from_trainer #base_model-microsoft/deberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# Pretraining_Test_v3
This model is a fine-tuned version of microsoft/deberta-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# Pretraining_Test_v3\n\nThis model is a fine-tuned version of microsoft/deberta-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #deberta #fill-mask #generated_from_trainer #base_model-microsoft/deberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# Pretraining_Test_v3\n\nThis model is a fine-tuned version of microsoft/deberta-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small hindi - Rikesh Silwal
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4416
- Wer: 32.5235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
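
The original training script is not part of the card; the following is only an assumed sketch of how the hyperparameters above might map onto `Seq2SeqTrainingArguments` (the evaluation cadence is inferred from the results table below, and the output directory name is illustrative):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-hi",   # illustrative name
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,                # lr_scheduler_warmup_steps: 500
    max_steps=4000,                  # training_steps: 4000
    fp16=True,                       # mixed_precision_training: Native AMP
    evaluation_strategy="steps",     # eval every 1000 steps, inferred from the results table
    eval_steps=1000,
    predict_with_generate=True,      # needed so WER can be computed on generated transcripts
)
```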
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0916 | 2.44 | 1000 | 0.2983 | 34.9784 |
| 0.0217 | 4.89 | 2000 | 0.3574 | 33.6367 |
| 0.0011 | 7.33 | 3000 | 0.4180 | 32.6970 |
| 0.0004 | 9.78 | 4000 | 0.4416 | 32.5235 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"language": ["hi"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper small hindi - Rikesh Silwal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 11.0", "type": "mozilla-foundation/common_voice_11_0", "config": "hi", "split": "test", "args": "config: hi, split: test"}, "metrics": [{"type": "wer", "value": 32.52349106916109, "name": "Wer"}]}]}]}
|
RikeshSilwal/whisper-small-hi
| null |
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:13:19+00:00
|
[] |
[
"hi"
] |
TAGS
#transformers #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #hi #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
Whisper small hindi - Rikesh Silwal
===================================
This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4416
* Wer: 32.5235
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 4000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #hi #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
* [DT12the/Math-Mixtral-7B](https://huggingface.co/DT12the/Math-Mixtral-7B)
## 🧩 Configuration
```yaml
base_model: allknowingroger/MultiverseEx26-7B-slerp
experts:
- source_model: allknowingroger/MultiverseEx26-7B-slerp
positive_prompts: ["what"]
- source_model: DT12the/Math-Mixtral-7B
positive_prompts: ["math"]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/MultiverseMath-12B-MoE"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "allknowingroger/MultiverseEx26-7B-slerp", "DT12the/Math-Mixtral-7B"], "base_model": ["allknowingroger/MultiverseEx26-7B-slerp", "DT12the/Math-Mixtral-7B"]}
|
allknowingroger/MultiverseMath-12B-MoE
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiverseEx26-7B-slerp",
"DT12the/Math-Mixtral-7B",
"base_model:allknowingroger/MultiverseEx26-7B-slerp",
"base_model:DT12the/Math-Mixtral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T12:13:20+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #DT12the/Math-Mixtral-7B #base_model-allknowingroger/MultiverseEx26-7B-slerp #base_model-DT12the/Math-Mixtral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a Mixture of Experts (MoE) made with the following models using LazyMergekit:
* allknowingroger/MultiverseEx26-7B-slerp
* DT12the/Math-Mixtral-7B
## Configuration
## Usage
|
[
"# NeuralPipe-7B-slerp\n\nNeuralPipe-7B-slerp is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* allknowingroger/MultiverseEx26-7B-slerp\n* DT12the/Math-Mixtral-7B",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/MultiverseEx26-7B-slerp #DT12the/Math-Mixtral-7B #base_model-allknowingroger/MultiverseEx26-7B-slerp #base_model-DT12the/Math-Mixtral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# NeuralPipe-7B-slerp\n\nNeuralPipe-7B-slerp is a Mixture of Experts (MoE) made with the following models using LazyMergekit:\n* allknowingroger/MultiverseEx26-7B-slerp\n* DT12the/Math-Mixtral-7B",
"## Configuration",
"## Usage"
] |
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
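
As a purely assumed sketch (the card itself provides no usage code): if this repository contains a standard PEFT (LoRA) adapter for the base model named in the metadata, `deepseek-ai/deepseek-coder-6.7b-instruct`, it could be loaded roughly like this:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-6.7b-instruct"  # base model from the card metadata
adapter_id = "Sloozi/deepseekv2"                       # this repository (assumed to hold a PEFT adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt only; the adapter's intended task is not documented.
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```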
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
{"library_name": "peft", "base_model": "deepseek-ai/deepseek-coder-6.7b-instruct"}
|
Sloozi/deepseekv2
| null |
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"region:us"
] | null |
2024-04-15T12:13:26+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-deepseek-ai/deepseek-coder-6.7b-instruct #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.7.1
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.1"
] |
[
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-deepseek-ai/deepseek-coder-6.7b-instruct #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.1"
] |
text-to-image
|
diffusers
|
# Kippi Ben Kippod [SDXL]
<Gallery />
([CivitAI](https://civitai.com/models/316984))
## Model description
<p>An SDXL LoRA for generating images of <a target="_blank" rel="ugc" href="https://muppet.fandom.com/wiki/Kippi_Ben_Kippod">Kippi Ben Kippod</a></p><p>Use <em>KippiBenKippod</em> in your prompts as a way to refer to this character</p><p>A version for Stable Diffusion v1.5 is <a rel="ugc" href="https://civitai.com/models/316834">available here</a></p>
## Trigger words
You should use `KippiBenKippod` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Norod78/kippi-ben-kippod-sdxl/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Norod78/kippi-ben-kippod-sdxl', weight_name='Kippi_Ben_Kippod_SDXL.safetensors')
image = pipeline('A vintage magazine cover featuring KippiBenKippod fighting aliens from outer space ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
{"license": "other", "tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora", "migrated", "character", "hedgehog", "sesame street", "kippi"], "license_name": "bespoke-lora-trained-license", "license_link": "https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=False", "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "KippiBenKippod", "widget": [{"text": "A photo of KippiBenKippod holding a beer at the pub ", "parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979053.jpeg"}}, {"text": "A KippiBenKippod with butterflies flying above is walking in a sunny field ", "parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979052.jpeg"}}, {"text": "A photo of KippiBenKippod having a bubble bath ", "parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979055.jpeg"}}, {"text": "The Starry Night with KippiBenKippod ", "parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979056.jpeg"}}, {"text": "The girl with a pearl earring with KippiBenKippod ", "parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979054.jpeg"}}, {"text": "A cute dog with KippiBenKippod ", "parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979057.jpeg"}}, {"text": "A photo of KippiBenKippod as a Gladiator fighting in the arena ", "parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979113.jpeg"}}, {"text": "A photo of KippiBenKippod trying to seduce a cute hedgehog ", "parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979059.jpeg"}}, {"text": "A psychedelic painting of KippiBenKippod on LSD Psychemelt style ", "parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979058.jpeg"}}, {"text": "A photo of KippiBenKippod dirty with sea weed at the beach but the old gods are rising ", "parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979062.jpeg"}}, {"text": "A cartoon KippiBenKippod in a Simpstyle living-room ", "parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979060.jpeg"}}, {"text": "A cartoon of a FuturamaStyle KippiBenKippod on a spaceship ", 
"parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979061.jpeg"}}, {"text": "A vintage magazine cover featuring KippiBenKippod fighting aliens from outer space ", "parameters": {"negative_prompt": "NSFW, Nude, Naked, Bikini, Blurry, Unfocused, Boobs, three arms, three hands, disfigured limbs, detached limbs, duplicated limbs"}, "output": {"url": "6979063.jpeg"}}]}
|
Norod78/kippi-ben-kippod-sdxl
| null |
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"character",
"hedgehog",
"sesame street",
"kippi",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | null |
2024-04-15T12:13:47+00:00
|
[] |
[] |
TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #migrated #character #hedgehog #sesame street #kippi #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-other #region-us
|
# Kippi Ben Kippod [SDXL]
<Gallery />
(CivitAI)
## Model description
<p>An SDXL LoRA for generating images of <a target="_blank" rel="ugc" href="URL Ben Kippod</a></p><p>Use <em>KippiBenKippod</em> in your prompts as a way to refer to this character</p><p>A version for Stable Diffusion v1.5 is <a rel="ugc" href="URL here</a></p>
## Trigger words
You should use 'KippiBenKippod' to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
## Use it with the diffusers library
For more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers
|
[
"# Kippi Ben Kippod [SDXL] \n\n<Gallery />\n\n\n\n(CivitAI)",
"## Model description\n\n<p>An SDXL LoRA for generating images of <a target=\"_blank\" rel=\"ugc\" href=\"URL Ben Kippod</a></p><p>Use <em>KippiBenKippod</em> in your prompts as a way to refer to this character</p><p>A version for Stable Diffusion v1.5 is <a rel=\"ugc\" href=\"URL here</a></p>",
"## Trigger words\nYou should use 'KippiBenKippod' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Use it with the diffusers library\n\n\n\nFor more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers"
] |
[
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #migrated #character #hedgehog #sesame street #kippi #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-other #region-us \n",
"# Kippi Ben Kippod [SDXL] \n\n<Gallery />\n\n\n\n(CivitAI)",
"## Model description\n\n<p>An SDXL LoRA for generating images of <a target=\"_blank\" rel=\"ugc\" href=\"URL Ben Kippod</a></p><p>Use <em>KippiBenKippod</em> in your prompts as a way to refer to this character</p><p>A version for Stable Diffusion v1.5 is <a rel=\"ugc\" href=\"URL here</a></p>",
"## Trigger words\nYou should use 'KippiBenKippod' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Use it with the diffusers library\n\n\n\nFor more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers"
] |
text-to-image
|
diffusers
|
# golaniyule
<Gallery />
## Model description
Use any checkpoint based on SD1.5.
Generate at 768×768.
Turn on Hires.fix (or not; please experiment with it yourself).
Hires steps: 10-17, denoising strength: roughly 0.4-0.65; just try it yourself.
CFG: 4-7.
Use ESRGAN 4x+ Anime6B as the upscaler, with DPM++ 2M Karras or DPM++ SDE Karras as the sampler.
Use 20-50 sampling steps.
LoRA strength: 0.4-1.
I suggest turning off the "Restore faces" checkbox for better results with this LoRA.
She might look like someone from real life, but she is not what you think. She is a totally fictional character.
## Download model
Weights for this model are available in Safetensors format.
[Download](/MarkBW/golaniyule/tree/main) them in the Files & versions tab.
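
The card does not show loading code, but assuming the repository holds a single standard LoRA `.safetensors` file that 🧨 diffusers can auto-detect, it could be used roughly as follows; the prompt and parameter values are illustrative only, loosely following the settings suggested above:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Assumes a single LoRA .safetensors file in the repo; otherwise pass weight_name explicitly.
pipeline.load_lora_weights("MarkBW/golaniyule")

image = pipeline(
    "best quality, photorealistic, portrait of a woman on a park bench",  # illustrative prompt
    width=768, height=768,                  # the card recommends 768x768
    num_inference_steps=30,                 # card: 20-50 sampling steps
    guidance_scale=6.0,                     # card: CFG 4-7
    cross_attention_kwargs={"scale": 0.7},  # LoRA strength; card: 0.4-1
).images[0]
image.save("golaniyule.png")
```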
|
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "UNICODE\u0000\u0000b\u0000e\u0000s\u0000t\u0000 \u0000q\u0000u\u0000a\u0000l\u0000i\u0000t\u0000y\u0000,\u0000 \u0000p\u0000h\u0000o\u0000t\u0000o\u0000r\u0000e\u0000a\u0000l\u0000i\u0000s\u0000t\u0000i\u0000c\u0000,\u0000 \u00008\u0000k\u0000,\u0000 \u0000h\u0000i\u0000g\u0000h\u0000 \u0000r\u0000e\u0000s\u0000,\u0000 \u0000f\u0000u\u0000l\u0000l\u0000 \u0000c\u0000o\u0000l\u0000o\u0000r\u0000,\u0000 \u00001\u0000g\u0000i\u0000r\u0000l\u0000,\u0000 \u0000w\u0000o\u0000m\u0000a\u0000n\u0000,\u0000 \u00002\u00000\u0000 \u0000y\u0000e\u0000a\u0000r\u0000s\u0000 \u0000o\u0000l\u0000d\u0000 \u0000w\u0000o\u0000m\u0000a\u0000n\u0000,\u0000 \u0000(\u0000c\u0000l\u0000o\u0000s\u0000e\u0000d\u0000 \u0000m\u0000o\u0000u\u0000t\u0000h\u0000:\u00001\u0000.\u00004\u00003\u0000)\u0000,\u0000 \u0000(\u0000s\u0000k\u0000i\u0000n\u0000d\u0000e\u0000n\u0000t\u0000a\u0000t\u0000i\u0000o\u0000n\u0000)\u0000,\u0000 \u0000(\u0000p\u0000o\u0000r\u0000t\u0000r\u0000a\u0000i\u0000t\u0000:\u00000\u0000.\u00006\u0000)\u0000,\u0000 \u0000t\u0000r\u0000e\u0000e\u0000s\u0000,\u0000 \u0000p\u0000a\u0000r\u0000k\u0000 \u0000b\u0000e\u0000n\u0000c\u0000h\u0000,\u0000 \u0000d\u0000a\u0000y\u0000l\u0000i\u0000g\u0000h\u0000t\u0000,\u0000 \u0000(\u0000(\u0000p\u0000a\u0000r\u0000k\u0000 \u0000b\u0000a\u0000c\u0000k\u0000g\u0000r\u0000o\u0000u\u0000n\u0000d\u0000:\u00001\u0000.\u00005\u00002\u0000)\u0000)\u0000,\u0000 \u0000f\u0000u\u0000l\u0000l\u0000 \u0000c\u0000o\u0000l\u0000o\u0000r\u0000,\u0000 \u0000(\u0000(\u0000w\u0000h\u0000i\u0000t\u0000e\u0000b\u0000u\u0000t\u0000t\u0000o\u0000n\u0000e\u0000d\u0000s\u0000h\u0000i\u0000r\u0000t\u0000:\u00001\u0000.\u00005\u00008\u0000)\u0000)\u0000,\u0000 \u0000l\u0000o\u0000o\u0000k\u0000i\u0000n\u0000g\u0000 \u0000a\u0000t\u0000 \u0000v\u0000i\u0000e\u0000w\u0000e\u0000r\u0000:\u00001\u0000.\u00008\u0000,\u0000 \u0000(\u00001\u0000g\u0000i\u0000r\u0000l\u0000 \u0000e\u0000y\u0000e\u0000s\u0000 \u0000l\u0000o\u0000o\u0000k\u0000i\u0000n\u0000g\u0000 \u0000a\u0000t\u0000 \u0000v\u0000i\u0000e\u0000w\u0000e\u0000r\u0000:\u00001\u0000.\u00005\u00005\u0000)\u0000,\u0000 \u0000(\u0000m\u0000e\u0000d\u0000i\u0000u\u0000m\u0000 \u0000h\u0000a\u0000i\u0000r\u0000,\u0000 \u0000b\u0000r\u0000o\u0000w\u0000n\u0000h\u0000a\u0000i\u0000r\u0000,\u0000 \u0000p\u0000a\u0000r\u0000t\u0000e\u0000d\u0000b\u0000a\u0000n\u0000g\u0000s\u0000:\u00001\u0000.\u00004\u00005\u0000)\u0000,\u0000 \u0000(\u0000b\u0000o\u0000k\u0000e\u0000h\u0000)\u0000,\u0000 \u0000<\u0000l\u0000o\u0000r\u0000a\u0000:\u0000A\u0000A\u0000G\u0000-\u0000g\u0000o\u0000l\u0000a\u0000n\u0000i\u0000y\u0000u\u0000l\u0000e\u0000:\u00000\u0000.\u00006\u00009\u0000>\u0000", "output": {"url": "images/00004-2347970651.jpeg"}}], "base_model": "runwayml/stable-diffusion-v1-5"}
|
MarkBW/golaniyule
| null |
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"region:us"
] | null |
2024-04-15T12:14:51+00:00
|
[] |
[] |
TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-runwayml/stable-diffusion-v1-5 #region-us
|
# golaniyule
<Gallery />
## Model description
Use any checkpoint based on SD1.5
It works with 768×768 images.
Turn on URL (or not; please experiment with it yourself).
Hires steps: 10-17, denoising: 0.4-0.65 or so; just try it yourself.
CFG: 4-7
Use ESRGAN 4x+ Anime6B with DPM++ 2M Karras or DPM++ SDE Karras.
Use 20-50 sampling steps.
Strength: 0.4-1
I suggest turning off the "Restore faces" checkbox for better results with this LoRA.
She might look like someone from real life, but she is not what you think. She is a totally fictional character.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
|
[
"# golaniyule\n\n<Gallery />",
"## Model description \n\nUse any checkpoint based on SD1.5\n\nits 768,768 image\n\nturn on URL (or not, please experiment with it yourself)\n\nHires steps: 10-17, denoising:04-65 ish.. just try it yourself\n\nCGE: 4-7\n\nUse ERSGAN 4X+Anime 6B, DPM++ 2M Karras or DPM++ SDE Karras\n\nUse 20-50 sampling steps\n\nStrength: 0.4-1\n\nI suggest turn off the restore faces check mark for better results of this Lora\n\nShe might look like someone from real life, but she is not what you think. She is a totally fictional character.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
[
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-runwayml/stable-diffusion-v1-5 #region-us \n",
"# golaniyule\n\n<Gallery />",
"## Model description \n\nUse any checkpoint based on SD1.5\n\nits 768,768 image\n\nturn on URL (or not, please experiment with it yourself)\n\nHires steps: 10-17, denoising:04-65 ish.. just try it yourself\n\nCGE: 4-7\n\nUse ERSGAN 4X+Anime 6B, DPM++ 2M Karras or DPM++ SDE Karras\n\nUse 20-50 sampling steps\n\nStrength: 0.4-1\n\nI suggest turn off the restore faces check mark for better results of this Lora\n\nShe might look like someone from real life, but she is not what you think. She is a totally fictional character.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# basic-trainer
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
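For reference, the hyperparameters above correspond roughly to the following 🤗 Trainer setup. This is only a sketch: the model, dataset, and output directory are placeholders, since the card does not say what was trained.

```python
# Sketch only: model and datasets are placeholders; the Adam betas and epsilon
# listed above are the defaults, so they need no explicit arguments.
from transformers import TrainingArguments, Trainer

args = TrainingArguments(
    output_dir="basic-trainer",
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```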
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2338 | 1.0 | 782 | 2.0671 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Tokenizers 0.15.2
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "basic-trainer", "results": []}]}
|
gurski/basic-trainer
| null |
[
"safetensors",
"generated_from_trainer",
"region:us"
] | null |
2024-04-15T12:15:30+00:00
|
[] |
[] |
TAGS
#safetensors #generated_from_trainer #region-us
|
basic-trainer
=============
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0671
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#safetensors #generated_from_trainer #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Tokenizers 0.15.2"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Prahas10/roof_classification
This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggingface.co/google/vit-base-patch32-384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0162
- Validation Loss: 0.2163
- Train Accuracy: 0.8916
- Epoch: 24
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4825, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.0001}
- training_precision: float32
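The optimizer above can be rebuilt with the `create_optimizer` helper from 🤗 Transformers; the sketch below is an approximation based only on the configuration listed, not code from the original training run.

```python
# AdamWeightDecay with a linear PolynomialDecay from 3e-5 to 0 over 4825 steps
# and weight decay rate 1e-4, matching the configuration above.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=4825,
    num_warmup_steps=0,
    weight_decay_rate=1e-4,
)
# model.compile(optimizer=optimizer)  # TF models supply their own loss
```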
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.5019 | 2.0795 | 0.3735 | 0 |
| 1.7660 | 1.7259 | 0.4458 | 1 |
| 1.0922 | 1.0990 | 0.7590 | 2 |
| 0.6402 | 0.8232 | 0.8193 | 3 |
| 0.4725 | 0.6107 | 0.8675 | 4 |
| 0.2674 | 0.4986 | 0.9157 | 5 |
| 0.1794 | 0.5000 | 0.9157 | 6 |
| 0.2579 | 0.7721 | 0.7349 | 7 |
| 0.1269 | 0.3304 | 0.8675 | 8 |
| 0.0970 | 0.2980 | 0.8795 | 9 |
| 0.1181 | 0.4988 | 0.8193 | 10 |
| 0.1241 | 0.2899 | 0.8795 | 11 |
| 0.2311 | 0.4113 | 0.8795 | 12 |
| 0.0753 | 0.2964 | 0.9157 | 13 |
| 0.0637 | 0.4096 | 0.8675 | 14 |
| 0.0540 | 0.3032 | 0.9036 | 15 |
| 0.0334 | 0.2694 | 0.9277 | 16 |
| 0.0212 | 0.1793 | 0.9639 | 17 |
| 0.0241 | 0.3772 | 0.8554 | 18 |
| 0.0471 | 0.5727 | 0.8675 | 19 |
| 0.0652 | 0.3167 | 0.8916 | 20 |
| 0.0281 | 0.2690 | 0.9036 | 21 |
| 0.0478 | 0.2169 | 0.9277 | 22 |
| 0.0193 | 0.2091 | 0.9880 | 23 |
| 0.0162 | 0.2163 | 0.8916 | 24 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google/vit-base-patch32-384", "model-index": [{"name": "Prahas10/roof_classification", "results": []}]}
|
Prahas10/roof_classification
| null |
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch32-384",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:20:23+00:00
|
[] |
[] |
TAGS
#transformers #tf #vit #image-classification #generated_from_keras_callback #base_model-google/vit-base-patch32-384 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Prahas10/roof\_classification
=============================
This model is a fine-tuned version of google/vit-base-patch32-384 on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.0162
* Validation Loss: 0.2163
* Train Accuracy: 0.8916
* Epoch: 24
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 3e-05, 'decay\_steps': 4825, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.0001}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.38.2
* TensorFlow 2.15.0
* Datasets 2.16.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 4825, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.0001}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.16.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tf #vit #image-classification #generated_from_keras_callback #base_model-google/vit-base-patch32-384 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 4825, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.0001}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.16.1\n* Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
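As a placeholder until official usage code is added, a minimal sketch with the text-classification pipeline might look like this; the label names returned depend on how the model was trained and are not documented here.

```python
from transformers import pipeline

# Unofficial example; the example sentence and the meaning of the labels are assumptions.
classifier = pipeline("text-classification", model="sok-fm/news_not_news_classifier-v2")
print(classifier("The central bank raised interest rates by 25 basis points today."))
```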
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
sok-fm/news_not_news_classifier-v2
| null |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:22:42+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** ramixpe
- **License:** apache-2.0
- **Finetuned from model :** meta-llama/Llama-2-13b-chat-hf
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
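The repo name (r128_a128_2ep) suggests a LoRA adapter (r=128, alpha=128, 2 epochs) on top of the base chat model. A hedged loading sketch, assuming it is a PEFT adapter rather than merged weights:

```python
# Assumption: this repo holds a PEFT/LoRA adapter. If it actually contains merged
# weights, load it directly with AutoModelForCausalLM.from_pretrained("ramixpe/r128_a128_2ep").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ramixpe/r128_a128_2ep")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
```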
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "meta-llama/Llama-2-13b-chat-hf"}
|
ramixpe/r128_a128_2ep
| null |
[
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:24:49+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #text-generation-inference #unsloth #llama #trl #en #base_model-meta-llama/Llama-2-13b-chat-hf #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: ramixpe
- License: apache-2.0
- Finetuned from model : meta-llama/Llama-2-13b-chat-hf
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: ramixpe\n- License: apache-2.0\n- Finetuned from model : meta-llama/Llama-2-13b-chat-hf\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #text-generation-inference #unsloth #llama #trl #en #base_model-meta-llama/Llama-2-13b-chat-hf #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: ramixpe\n- License: apache-2.0\n- Finetuned from model : meta-llama/Llama-2-13b-chat-hf\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | null |
ToolsBaer Gmail Backup Software is an indispensable tool for individuals and businesses who rely heavily on their Gmail accounts for email and important data backups. With this application, users can securely back up all of their Gmail account's contacts, attachments, and other data. Gmail files and folders can be readily backed up and stored in PST, MBOX, EML, and MSG file formats, according to user needs. Any specific folder, including Sent, Outbox, Deleted, and others, can be backed up at the user's discretion. The application does not require users to install any additional software, including Microsoft Outlook. Users of Windows XP, Vista, 7, 8, 10, and 11 can all use the application effectively. The program is free to download and use.
Read More: - http://www.toolsbaer.com/gmail-backup/
|
{}
|
madelineoliver/ToolsBaer-Gmail-Backup-tool
| null |
[
"region:us"
] | null |
2024-04-15T12:26:26+00:00
|
[] |
[] |
TAGS
#region-us
|
ToolsBaer Gmail Backup Software is an indispensable tool for individuals and businesses who rely heavily on their Gmail accounts for email and important data backups. With this application, users can securely back up all of their Gmail account's contacts, attachments, and other data. Gmail files and folders can be readily backed up and stored in PST, MBOX, EML, and MSG file formats, according to user needs. Any specific folder, including Sent, Outbox, Deleted, and others, can be backed up at the user's discretion. The application does not require users to install any additional software, including Microsoft Outlook. Users of Windows XP, Vista, 7, 8, 10, and 11 can all use the application effectively. The program is free to download and use.
Read More: - URL
|
[] |
[
"TAGS\n#region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-ft-ttvsplit
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
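The configuration above is plain Adam at a 1e-5 learning rate. A compile sketch (not from the original card; the number of labels is an assumption):

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# num_labels=2 is a guess; the card does not state the label set.
model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5))  # model supplies its own loss
```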
### Training results
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-ft-ttvsplit", "results": []}]}
|
thomasavare/distilbert-ft-ttvsplit
| null |
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:27:28+00:00
|
[] |
[] |
TAGS
#transformers #tf #distilbert #text-classification #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# distilbert-ft-ttvsplit
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# distilbert-ft-ttvsplit\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- TensorFlow 2.15.0\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilbert-ft-ttvsplit\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- TensorFlow 2.15.0\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
DeepIQInc/eval_models5cdc18a0-4b5f-48f7-9f70-bd526e226b92
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:28:05+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|

## VAGO solutions SauerkrautLM-Qwen-32b
Introducing **SauerkrautLM-Qwen-32b** – our Sauerkraut version of the powerful [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)!
The model **SauerkrautLM-Qwen-32b** is a **joint effort** between **VAGO solutions** and **Hyperspace.ai.**
- Finetuned with **SFT**
- Aligned with **DPO**
# Table of Contents
1. [Overview of all SauerkrautLM-Qwen-32b](#all-SauerkrautLM-Qwen-32b)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training procedure](#proceed-of-the-training)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-Qwen-32b
| Model | HF | EXL2 | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Qwen-32b | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Qwen-32b) | coming soon | coming soon | coming soon |
## Model Details
**SauerkrautLM-Qwen-32b**
- **Model Type:** SauerkrautLM-Qwen-32b is a finetuned Model based on [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)
- **Language(s):** German, English
- **License:** [tongyi-qianwen-research](https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE)
- **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/)
### Training procedure:
- We trained this model for 2 epochs on 160k data samples with SFT.
- Afterwards we applied DPO for 1 epoch with 110k data.
- LaserRMT version coming soon
**We taught this model German language skills.** As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still a work in progress).
### Prompt Template:
**English:**
```
<|im_start|>system
You are SauerkrautLM, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
**German:**
```
<|im_start|>system
Du bist SauerkrautLM, ein hilfreicher und freundlicher KI-Assistent.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
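In 🤗 Transformers this template can usually be applied via the tokenizer's chat template (a usage sketch, assuming the tokenizer ships the ChatML template shown above, as Qwen1.5 tokenizers normally do):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAGOsolutions/SauerkrautLM-Qwen-32b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are SauerkrautLM, a helpful AI assistant."},
    {"role": "user", "content": "Explain the difference between SFT and DPO."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```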
### Example output of german language:
```
Die Karte zeigte eine verborgene Höhle, die in den umliegenden Bergen zu finden war. Mia war von der Idee fasziniert, diese Höhle zu entdecken und ihre Geheimnisse zu lüften. Sie nahm die Karte mit sich und machte sich auf den Weg, um die Höhle zu finden.
Die Wanderung zu den Bergen war eine Herausforderung, aber Mia war fest entschlossen, ihr Abenteuer zu vollenden. Sie überwand steinige Wege und überquerte klirrende Bäche, die ihre Füße kühlten und ihr die Energie für den Rest des Weges gab.
Endlich erreichte Mia die Höhle, die von einem dichten Wald umgeben war. Die Höhle war ein Ort der Geheimnisse und des Staunens, der ihr Herz höher schlagen ließ. Sie betrat die Höhle, und die Dunkelheit umhüllte sie wie ein Schleier aus Stille.
In der Höhle fand Mia eine alte Schatzkiste, die mit einem alten, verwitterten Holz verziert war. Mit zitternden Händen öffnete sie die Schatzkiste und fand darin eine alte, zerfledderte Schriftrolle. Die Schriftrolle war ein geheimnisvolles Artefakt, das ihr die Geschichte der Höhle offenbarte.
```
## Evaluation
**Open LLM Leaderboard:**
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | **73.11** |
| ARC (25-shot) | 59.22 |
| HellaSwag (10-shot) | 82.32 |
| MMLU (5-shot) | 74.40|
| TruthfulQA (0-shot) | 61.03 |
| Winogrande (5-shot) | 82.16 |
| GSM8K (5-shot) | 79.53 |
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/)
## Acknowledgement
Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such a valuable model to the open-source community
|
{"language": ["de", "en"], "license": "other", "tags": ["sft", "dpo"], "license_name": "tongyi-qianwen-research", "license_link": "https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE"}
|
blockblockblock/SauerkrautLM-Qwen-32b-bpw4.8
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"sft",
"dpo",
"conversational",
"de",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T12:28:06+00:00
|
[] |
[
"de",
"en"
] |
TAGS
#transformers #safetensors #qwen2 #text-generation #sft #dpo #conversational #de #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
!SauerkrautLM
VAGO solutions SauerkrautLM-Qwen-32b
------------------------------------
Introducing SauerkrautLM-Qwen-32b – our Sauerkraut version of the powerful Qwen/Qwen1.5-32B!
The model SauerkrautLM-Qwen-32b is a joint effort between VAGO solutions and URL.
* Finetuned with SFT
* Aligned with DPO
Table of Contents
=================
1. Overview of all SauerkrautLM-Qwen-32b
2. Model Details
* Prompt template
* Training procedure
3. Evaluation
4. Disclaimer
5. Contact
6. Collaborations
7. Acknowledgement
All SauerkrautLM-Qwen-32b
-------------------------
Model Details
-------------
SauerkrautLM-Qwen-32b
* Model Type: SauerkrautLM-Qwen-32b is a finetuned Model based on Qwen/Qwen1.5-32B
* Language(s): German, English
* License: tongyi-qianwen-research
* Contact: VAGO solutions, URL
### Training procedure:
* We trained this model for 2 epochs on 160k data samples with SFT.
* Afterwards we applied DPO for 1 epoch with 110k data.
* LaserRMT version coming soon
We taught this model German language skills. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still a work in progress).
### Prompt Template:
English:
German:
### Example output of german language:
Evaluation
----------
Open LLM Leaderboard:
Disclaimer
----------
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
Contact
-------
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
Collaborations
--------------
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer
Acknowledgement
---------------
Many thanks to Qwen for providing such a valuable model to the open-source community
|
[
"### Training procedure:\n\n\n* We trained this model for 2 epochs on 160k data samples with SFT.\n* Afterwards we applied DPO for 1 epoch with 110k data.\n* LaserRMT version coming soon\n\n\nWe teached German language skills on this model. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).",
"### Prompt Template:\n\n\nEnglish:\n\n\nGerman:",
"### Example output of german language:\n\n\nEvaluation\n----------\n\n\nOpen LLM Leaderboard:\n\n\n\nDisclaimer\n----------\n\n\nWe must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.\nHowever, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.\nAdditionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.\n\n\nContact\n-------\n\n\nIf you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.\n\n\nCollaborations\n--------------\n\n\nWe are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer\n\n\nAcknowledgement\n---------------\n\n\nMany thanks to Qwen for providing such valuable model to the Open-Source community"
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #sft #dpo #conversational #de #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training procedure:\n\n\n* We trained this model for 2 epochs on 160k data samples with SFT.\n* Afterwards we applied DPO for 1 epoch with 110k data.\n* LaserRMT version coming soon\n\n\nWe teached German language skills on this model. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).",
"### Prompt Template:\n\n\nEnglish:\n\n\nGerman:",
"### Example output of german language:\n\n\nEvaluation\n----------\n\n\nOpen LLM Leaderboard:\n\n\n\nDisclaimer\n----------\n\n\nWe must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.\nHowever, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.\nAdditionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.\n\n\nContact\n-------\n\n\nIf you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.\n\n\nCollaborations\n--------------\n\n\nWe are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer\n\n\nAcknowledgement\n---------------\n\n\nMany thanks to Qwen for providing such valuable model to the Open-Source community"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_recommendation_jobs_skills_p2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3964
- Rouge1: 56.9756
- Rouge2: 34.2311
- Rougel: 56.9100
- Rougelsum: 56.8854
- Gen Len: 3.7743
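A rough inference sketch (the expected input format is not documented, so the example text below is only a guess at a job-description-style input; the short Gen Len suggests the model emits brief skill labels):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "mostafa0841/t5_recommendation_jobs_skills_p2"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

text = "We are hiring a data engineer to build ETL pipelines on AWS."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```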
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 187 | 0.5018 | 43.2644 | 20.0376 | 43.2200 | 43.2499 | 3.5588 |
| No log | 2.0 | 375 | 0.4504 | 49.0121 | 26.3671 | 48.9428 | 48.9444 | 3.5626 |
| 0.8598 | 3.0 | 562 | 0.4304 | 51.2228 | 29.2167 | 51.1510 | 51.1381 | 3.6700 |
| 0.8598 | 4.0 | 750 | 0.4183 | 51.5133 | 28.5328 | 51.3758 | 51.3621 | 3.6700 |
| 0.8598 | 5.0 | 937 | 0.4106 | 53.6532 | 31.0763 | 53.5764 | 53.4992 | 3.6591 |
| 0.3439 | 6.0 | 1125 | 0.4010 | 52.7163 | 29.5176 | 52.5233 | 52.5780 | 3.7370 |
| 0.3439 | 7.0 | 1312 | 0.4027 | 54.6573 | 32.0853 | 54.5163 | 54.5007 | 3.6591 |
| 0.2889 | 8.0 | 1500 | 0.3963 | 54.5537 | 31.8771 | 54.4623 | 54.4597 | 3.6475 |
| 0.2889 | 9.0 | 1687 | 0.3952 | 55.0573 | 32.2229 | 54.9448 | 54.9567 | 3.6514 |
| 0.2889 | 10.0 | 1875 | 0.3907 | 55.0968 | 32.9791 | 55.0473 | 55.0184 | 3.7089 |
| 0.248 | 11.0 | 2062 | 0.3915 | 56.5185 | 34.3867 | 56.4045 | 56.4487 | 3.6918 |
| 0.248 | 12.0 | 2250 | 0.3942 | 57.3052 | 34.2798 | 57.2348 | 57.2058 | 3.7689 |
| 0.248 | 13.0 | 2437 | 0.3972 | 55.5294 | 33.1886 | 55.4932 | 55.4813 | 3.7214 |
| 0.2203 | 14.0 | 2625 | 0.3939 | 55.9577 | 33.3766 | 55.8957 | 55.8786 | 3.7479 |
| 0.2203 | 14.96 | 2805 | 0.3964 | 56.9756 | 34.2311 | 56.9100 | 56.8854 | 3.7743 |
### Framework versions
- Transformers 4.27.0
- Pytorch 2.1.2
- Datasets 2.8.0
- Tokenizers 0.13.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "t5_recommendation_jobs_skills_p2", "results": []}]}
|
mostafa0841/t5_recommendation_jobs_skills_p2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T12:29:46+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5\_recommendation\_jobs\_skills\_p2
====================================
This model is a fine-tuned version of t5-small on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3964
* Rouge1: 56.9756
* Rouge2: 34.2311
* Rougel: 56.9100
* Rougelsum: 56.8854
* Gen Len: 3.7743
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.01
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
### Training results
### Framework versions
* Transformers 4.27.0
* Pytorch 2.1.2
* Datasets 2.8.0
* Tokenizers 0.13.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.01\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.27.0\n* Pytorch 2.1.2\n* Datasets 2.8.0\n* Tokenizers 0.13.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.01\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.27.0\n* Pytorch 2.1.2\n* Datasets 2.8.0\n* Tokenizers 0.13.3"
] |
translation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-ru
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3776
- Bleu: 29.2562
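A minimal usage sketch (the example sentence is a placeholder, not taken from the evaluation set):

```python
from transformers import pipeline

translator = pipeline("translation", model="DonutsHunter/marian-finetuned-kde4-en-to-ru")
result = translator("Unable to open the requested file.")
print(result[0]["translation_text"])
```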
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "datasets": ["kde4"], "metrics": ["bleu"], "base_model": "Helsinki-NLP/opus-mt-en-ru", "model-index": [{"name": "marian-finetuned-kde4-en-to-ru", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "config": "en-ru", "split": "train", "args": "en-ru"}, "metrics": [{"type": "bleu", "value": 29.256220948376743, "name": "Bleu"}]}]}]}
|
DonutsHunter/marian-finetuned-kde4-en-to-ru
| null |
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-ru",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:33:38+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #marian #text2text-generation #translation #generated_from_trainer #dataset-kde4 #base_model-Helsinki-NLP/opus-mt-en-ru #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# marian-finetuned-kde4-en-to-ru
This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ru on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3776
- Bleu: 29.2562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# marian-finetuned-kde4-en-to-ru\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ru on the kde4 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.3776\n- Bleu: 29.2562",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #marian #text2text-generation #translation #generated_from_trainer #dataset-kde4 #base_model-Helsinki-NLP/opus-mt-en-ru #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# marian-finetuned-kde4-en-to-ru\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ru on the kde4 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.3776\n- Bleu: 29.2562",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
sentence-similarity
|
sentence-transformers
|
# dwulff/mxbai-personality
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('dwulff/mxbai-personality')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=dwulff/mxbai-personality)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3125 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 625,
"weight_decay": 0.01
}
```
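Taken together, the settings above correspond roughly to the training sketch below (the sentence pairs and similarity labels are placeholders; the actual training data is not documented in this card):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder pairs with similarity labels in [0, 1].
train_examples = [
    InputExample(texts=["I enjoy meeting new people", "I am outgoing and sociable"], label=0.9),
    InputExample(texts=["I enjoy meeting new people", "I always plan ahead"], label=0.2),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)

model = SentenceTransformer("dwulff/mxbai-personality")
train_loss = losses.CosineSimilarityLoss(model)

# Mirrors the fit() parameters listed above (AdamW, lr 2e-5, linear warmup over 625 steps).
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=625,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```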
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
|
dwulff/mxbai-personality
| null |
[
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:38:29+00:00
|
[] |
[] |
TAGS
#sentence-transformers #safetensors #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# dwulff/mxbai-personality
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 3125 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# dwulff/mxbai-personality\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 3125 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #safetensors #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# dwulff/mxbai-personality\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 3125 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-generation
| null |
# CodeQwen1.5-7B-Chat-GGUF
## Introduction
CodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of code data.
* Strong code generation capabilities and competitive performance across a series of benchmarks;
* Supporting long context understanding and generation with the context length of 64K tokens;
* Supporting 92 coding languages
* Excellent performance in text-to-SQL, bug fix, etc.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/codeqwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
In this repo, we provide quantized models in the GGUF formats, including `q2_k`, `q3_k_m`, `q4_0`, `q4_k_m`, `q5_0`, `q5_k_m`, `q6_k` and `q8_0`.
## Model Details
CodeQwen1.5 is based on Qwen1.5, a language model series including decoder language models of different model sizes. It is trained on 3 trillion tokens of code data, and it includes group query attention (GQA) for efficient inference.
## Requirements
We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide.
## How to use
Cloning the repo may be inefficient, and thus you can manually download the GGUF file that you need or use `huggingface-cli` (`pip install huggingface_hub`) as shown below:
```shell
huggingface-cli download Qwen/CodeQwen1.5-7B-Chat-GGUF codeqwen-1_5-7b-chat-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
```
We demonstrate how to use `llama.cpp` to run Qwen1.5:
```shell
./main -m codeqwen-1_5-7b-chat-q5_k_m.gguf -n 512 --color -i -cml -f prompts/chat-with-qwen.txt
```
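The same GGUF file can also be used from Python through the `llama-cpp-python` bindings; the sketch below is an illustration under that assumption (install with `pip install llama-cpp-python`, and adjust `n_ctx` and `n_gpu_layers` to your hardware):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="codeqwen-1_5-7b-chat-q5_k_m.gguf",
    n_ctx=4096,        # the model supports up to 64K tokens; 4096 keeps memory use modest
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```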
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
|
{"language": ["en"], "license": "other", "tags": ["chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat-GGUF/blob/main/LICENSE", "pipeline_tag": "text-generation"}
|
Qwen/CodeQwen1.5-7B-Chat-GGUF
| null |
[
"gguf",
"chat",
"text-generation",
"en",
"license:other",
"region:us"
] | null |
2024-04-15T12:38:36+00:00
|
[] |
[
"en"
] |
TAGS
#gguf #chat #text-generation #en #license-other #region-us
|
# CodeQwen1.5-7B-Chat-GGUF
## Introduction
CodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of code data.
* Strong code generation capabilities and competitive performance across a series of benchmarks;
* Supporting long context understanding and generation with the context length of 64K tokens;
* Supporting 92 coding languages
* Excellent performance in text-to-SQL, bug fix, etc.
For more details, please refer to our blog post and GitHub repo.
In this repo, we provide quantized models in the GGUF formats, including 'q2_k', 'q3_k_m', 'q4_0', 'q4_k_m', 'q5_0', 'q5_k_m', 'q6_k' and 'q8_0'.
## Model Details
CodeQwen1.5 is based on Qwen1.5, a language model series including decoder language models of different model sizes. It is trained on 3 trillion tokens of code data, and it includes group query attention (GQA) for efficient inference.
## Requirements
We advise you to clone 'URL' and install it following the official guide.
## How to use
Cloning the repo may be inefficient, and thus you can manually download the GGUF file that you need or use 'huggingface-cli' ('pip install huggingface_hub') as shown below:
We demonstrate how to use 'URL' to run Qwen1.5:
If you find our work helpful, feel free to give us a cite.
|
[
"# CodeQwen1.5-7B-Chat-GGUF",
"## Introduction\n\nCodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of data of codes. \n\n* Strong code generation capabilities and competitve performance across a series of benchmarks;\n* Supporting long context understanding and generation with the context length of 64K tokens;\n* Supporting 92 coding languages\n* Excellent performance in text-to-SQL, bug fix, etc.\n\n\nFor more details, please refer to our blog post and GitHub repo. \nIn this repo, we provide quantized models in the GGUF formats, including 'q2_k', 'q3_k_m', 'q4_0', 'q4_k_m', 'q5_0', 'q5_k_m', 'q6_k' and 'q8_0'.",
"## Model Details\nCodeQwen1.5 is based on Qwen1.5, a language model series including decoder language models of different model sizes. It is trained on 3 trillion tokens of data of codes, and it includes group query attention (GQA) for efficient inference.",
"## Requirements\nWe advise you to clone 'URL' and install it following the official guide.",
"## How to use\nCloning the repo may be inefficient, and thus you can manually download the GGUF file that you need or use 'huggingface-cli' ('pip install huggingface_hub') as shown below:\n\n\nWe demonstrate how to use 'URL' to run Qwen1.5:\n\n\n\nIf you find our work helpful, feel free to give us a cite."
] |
[
"TAGS\n#gguf #chat #text-generation #en #license-other #region-us \n",
"# CodeQwen1.5-7B-Chat-GGUF",
"## Introduction\n\nCodeQwen1.5 is the Code-Specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of data of codes. \n\n* Strong code generation capabilities and competitve performance across a series of benchmarks;\n* Supporting long context understanding and generation with the context length of 64K tokens;\n* Supporting 92 coding languages\n* Excellent performance in text-to-SQL, bug fix, etc.\n\n\nFor more details, please refer to our blog post and GitHub repo. \nIn this repo, we provide quantized models in the GGUF formats, including 'q2_k', 'q3_k_m', 'q4_0', 'q4_k_m', 'q5_0', 'q5_k_m', 'q6_k' and 'q8_0'.",
"## Model Details\nCodeQwen1.5 is based on Qwen1.5, a language model series including decoder language models of different model sizes. It is trained on 3 trillion tokens of data of codes, and it includes group query attention (GQA) for efficient inference.",
"## Requirements\nWe advise you to clone 'URL' and install it following the official guide.",
"## How to use\nCloning the repo may be inefficient, and thus you can manually download the GGUF file that you need or use 'huggingface-cli' ('pip install huggingface_hub') as shown below:\n\n\nWe demonstrate how to use 'URL' to run Qwen1.5:\n\n\n\nIf you find our work helpful, feel free to give us a cite."
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
avemio-digital/sauerkraut_combinedata_conversationwiki_v2
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T12:38:51+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# UNfilteredAI-1B
**Model Name**: UNfilteredAI-1B
**Model Type**: Text Generation
## About the Model
The UNfilteredAI-1B model is a large-scale text generation model developed by UnfilteredAI. This model is designed to push the boundaries of creativity and innovation in AI-generated content, without the constraints of traditional content moderation or filtering.
## Key Features
- **Uncensored and Unrestricted**: The UNfilteredAI-1B model is specifically engineered to generate text without any content restrictions or limitations. This allows for the exploration of a wide range of topics and styles, including potentially controversial or sensitive subject matter.
- **Extensive Training**: The model has been trained on a vast corpus of diverse textual data, enabling it to generate highly coherent and contextually relevant content across a broad range of domains.
- **Versatile Applications**: The UNfilteredAI-1B model can be utilized for a variety of text-based tasks, such as creative writing, conversational AI, and even educational or research-oriented applications.
## Intended Use
The UNfilteredAI-1B model is intended for use by experienced and responsible AI researchers, developers, and enthusiasts who are interested in pushing the boundaries of language generation and exploring the potential of uncensored AI technologies.
## Limitations and Ethical Considerations
- **Potential for Misuse**: The uncensored nature of the UNfilteredAI-1B model means that it could be used to generate harmful, unethical, or illegal content. Users should exercise caution and responsibility when utilizing this model.
- **Bias and Inconsistency**: As with many large language models, the UNfilteredAI-1B model may exhibit biases and inconsistencies in its outputs, which could lead to the generation of inaccurate, inappropriate, or even offensive content.
- **Sensitive Content**: The model is capable of generating explicit, adult-oriented, or otherwise sensitive content. Users should be aware of the potential risks and ensure that the model is used in an appropriate and ethical manner.
UnfilteredAI acknowledges the significant ethical considerations and potential risks associated with the development and deployment of uncensored AI models. We encourage users to engage with this model responsibly and to be mindful of the potential impact of their actions.
|
{"language": ["en"], "license": "other", "tags": ["UnfilteredAI"]}
|
UnfilteredAI/UNfilteredAI-1B
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"UnfilteredAI",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T12:38:56+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #llama #text-generation #UnfilteredAI #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# UNfilteredAI-1B
Model Name: UNfilteredAI-1B
Model Type: Text Generation
## About the Model
The UNfilteredAI-1B model is a large-scale text generation model developed by UnfilteredAI. This model is designed to push the boundaries of creativity and innovation in AI-generated content, without the constraints of traditional content moderation or filtering.
## Key Features
- Uncensored and Unrestricted: The UNfilteredAI-1B model is specifically engineered to generate text without any content restrictions or limitations. This allows for the exploration of a wide range of topics and styles, including potentially controversial or sensitive subject matter.
- Extensive Training: The model has been trained on a vast corpus of diverse textual data, enabling it to generate highly coherent and contextually relevant content across a broad range of domains.
- Versatile Applications: The UNfilteredAI-1B model can be utilized for a variety of text-based tasks, such as creative writing, conversational AI, and even educational or research-oriented applications.
## Intended Use
The UNfilteredAI-1B model is intended for use by experienced and responsible AI researchers, developers, and enthusiasts who are interested in pushing the boundaries of language generation and exploring the potential of uncensored AI technologies.
## Limitations and Ethical Considerations
- Potential for Misuse: The uncensored nature of the UNfilteredAI-1B model means that it could be used to generate harmful, unethical, or illegal content. Users should exercise caution and responsibility when utilizing this model.
- Bias and Inconsistency: As with many large language models, the UNfilteredAI-1B model may exhibit biases and inconsistencies in its outputs, which could lead to the generation of inaccurate, inappropriate, or even offensive content.
- Sensitive Content: The model is capable of generating explicit, adult-oriented, or otherwise sensitive content. Users should be aware of the potential risks and ensure that the model is used in an appropriate and ethical manner.
UnfilteredAI acknowledges the significant ethical considerations and potential risks associated with the development and deployment of uncensored AI models. We encourage users to engage with this model responsibly and to be mindful of the potential impact of their actions.
|
[
"# UNfilteredAI-1B\n\nModel Name: UNfilteredAI-1B\nModel Type: Text Generation",
"## About the Model\n\nThe UNfilteredAI-1B model is a large-scale text generation model developed by UnfilteredAI. This model is designed to push the boundaries of creativity and innovation in AI-generated content, without the constraints of traditional content moderation or filtering.",
"## Key Features\n\n- Uncensored and Unrestricted: The UNfilteredAI-1B model is specifically engineered to generate text without any content restrictions or limitations. This allows for the exploration of a wide range of topics and styles, including potentially controversial or sensitive subject matter.\n- Extensive Training: The model has been trained on a vast corpus of diverse textual data, enabling it to generate highly coherent and contextually relevant content across a broad range of domains.\n- Versatile Applications: The UNfilteredAI-1B model can be utilized for a variety of text-based tasks, such as creative writing, conversational AI, and even educational or research-oriented applications.",
"## Intended Use\n\nThe UNfilteredAI-1B model is intended for use by experienced and responsible AI researchers, developers, and enthusiasts who are interested in pushing the boundaries of language generation and exploring the potential of uncensored AI technologies.",
"## Limitations and Ethical Considerations\n\n- Potential for Misuse: The uncensored nature of the UNfilteredAI-1B model means that it could be used to generate harmful, unethical, or illegal content. Users should exercise caution and responsibility when utilizing this model.\n- Bias and Inconsistency: As with many large language models, the UNfilteredAI-1B model may exhibit biases and inconsistencies in its outputs, which could lead to the generation of inaccurate, inappropriate, or even offensive content.\n- Sensitive Content: The model is capable of generating explicit, adult-oriented, or otherwise sensitive content. Users should be aware of the potential risks and ensure that the model is used in an appropriate and ethical manner.\n\nUnfilteredAI acknowledges the significant ethical considerations and potential risks associated with the development and deployment of uncensored AI models. We encourage users to engage with this model responsibly and to be mindful of the potential impact of their actions."
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #UnfilteredAI #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# UNfilteredAI-1B\n\nModel Name: UNfilteredAI-1B\nModel Type: Text Generation",
"## About the Model\n\nThe UNfilteredAI-1B model is a large-scale text generation model developed by UnfilteredAI. This model is designed to push the boundaries of creativity and innovation in AI-generated content, without the constraints of traditional content moderation or filtering.",
"## Key Features\n\n- Uncensored and Unrestricted: The UNfilteredAI-1B model is specifically engineered to generate text without any content restrictions or limitations. This allows for the exploration of a wide range of topics and styles, including potentially controversial or sensitive subject matter.\n- Extensive Training: The model has been trained on a vast corpus of diverse textual data, enabling it to generate highly coherent and contextually relevant content across a broad range of domains.\n- Versatile Applications: The UNfilteredAI-1B model can be utilized for a variety of text-based tasks, such as creative writing, conversational AI, and even educational or research-oriented applications.",
"## Intended Use\n\nThe UNfilteredAI-1B model is intended for use by experienced and responsible AI researchers, developers, and enthusiasts who are interested in pushing the boundaries of language generation and exploring the potential of uncensored AI technologies.",
"## Limitations and Ethical Considerations\n\n- Potential for Misuse: The uncensored nature of the UNfilteredAI-1B model means that it could be used to generate harmful, unethical, or illegal content. Users should exercise caution and responsibility when utilizing this model.\n- Bias and Inconsistency: As with many large language models, the UNfilteredAI-1B model may exhibit biases and inconsistencies in its outputs, which could lead to the generation of inaccurate, inappropriate, or even offensive content.\n- Sensitive Content: The model is capable of generating explicit, adult-oriented, or otherwise sensitive content. Users should be aware of the potential risks and ensure that the model is used in an appropriate and ethical manner.\n\nUnfilteredAI acknowledges the significant ethical considerations and potential risks associated with the development and deployment of uncensored AI models. We encourage users to engage with this model responsibly and to be mindful of the potential impact of their actions."
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-37-layer](https://huggingface.co/Citaman/command-r-37-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-37-layer
layer_range: [0, 36]
- model: Citaman/command-r-37-layer
layer_range: [1, 37]
merge_method: slerp
base_model: Citaman/command-r-37-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
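A minimal loading sketch for the resulting merge (the dtype and device settings below are assumptions, not part of the merge configuration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Citaman/command-r-36-layer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=64)[0], skip_special_tokens=True))
```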
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-37-layer"]}
|
Citaman/command-r-36-layer
| null |
[
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-37-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T12:39:17+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-37-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-37-layer
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-37-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-37-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-37-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
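One possible completion of the snippet above (the checkpoint filename and evaluation settings are assumptions; check the repository's file list for the actual archive name):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; it must match the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="keshav-kumar/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Requires the Box2D extras: pip install "gymnasium[box2d]"
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```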
|
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "274.32 +/- 18.37", "name": "mean_reward", "verified": false}]}]}]}
|
keshav-kumar/ppo-LunarLander-v2
| null |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-15T12:39:56+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
|
[
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
[
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation
|
transformers
|
# Experiment26Mergerix-7B
Experiment26Mergerix-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: yam-peleg/Experiment26-7B
- model: MiniMoog/Mergerix-7b-v0.3
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment26Mergerix-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
|
automerger/Experiment26Mergerix-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T12:40:41+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Experiment26Mergerix-7B
Experiment26Mergerix-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
|
[
"# Experiment26Mergerix-7B\n\nExperiment26Mergerix-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Experiment26Mergerix-7B\n\nExperiment26Mergerix-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
text-to-image
|
diffusers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "diffusers"}
|
Niggendar/RealHentai_V20
| null |
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-15T12:41:11+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
summarization
|
peft
|
# Model Card for Model ID
## Model Details
### Model Description
Summarise Korean sentences concisely
- **Developed by:** [Kang Seok Ju]
- **Contact:** [[email protected]]
## Training Details
### Training Data
https://huggingface.co/datasets/brildev7/polite_summary_by_gpt4
# Inference Examples
```
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
model_id = "google/gemma-7b"
peft_model_id = "brildev7/gemma-7b-polite-summarization-ko-sft-qlora"
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16,
bnb_4bit_quant_type="nf4"
)
model = AutoModelForCausalLM.from_pretrained(model_id,
quantization_config=quantization_config,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
attn_implementation="flash_attention_2",
device_map="auto")
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
# example
prompt_template = "다음 글을 요약하세요.:{}\n요약:"
passage = "기획재정부는 20일 이 같은 내용의 '주류 면허 등에 관한 법률 시행령' 개정안을 입법 예고했다. 개정안에는 주류 판매업 면허 취소의 예외에 해당하는 주류의 단순가공·조작의 범위를 술잔 등 빈 용기에 주류를 나눠 담아 판매하는 경우 등이 포함됐다. 식당·주점 등에서 주류를 판매할 때 술을 잔에 나눠 판매할 수 있다는 의미다. 종합주류도매업자가 주류제조자 등이 제조·판매하는 비알코올 음료 또는 무알코올 음료를 주류와 함께 음식점 등에 공급할 수 있도록 주류판매 전업의무 면허요건도 완화했다. 현재 알코올 도수가 0%인 음료는 '무알코올 음료'로, 0% 이상 1% 미만인 것은 '비알코올 음료'로 구분된다. 현행 규정상 무알코올·비알코올 주류는 주류 업자가 유통할 수 없는데 이 규정을 완화한다는 것이다. 기재부는 다음 달 29일까지 의견 수렴을 거쳐 이르면 다음 달 말부터 시행할 예정이다."
prompt = prompt_template.format(passage)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs,
max_new_tokens=1024,
temperature=0.2,
top_p=0.95,
do_sample=True,
use_cache=False)
print(tokenizer.decode(outputs[0]))
- 20일 기획재정부는 '주류 면허 등에 관한 법률 시행령' 개정안을 입법 예고했으며, 개정안에는 주류 판매업 면허 취소의 예외로 주류의 단순가공·조작의 범위를 술잔 등의 용기에 나누어 판매하는 경우가 포함되어 있습니다. 또한, 종합주류도매업자가 주류제조자 등이 제조·판매하는 비알코올 음료 또는 무알코올 음료를 주류와 함께 음식점에 공급 가능해지게 되었습니다.
# example
prompt_template = "다음 글을 요약하세요.:{}\n요약:"
passage = "지난 1월 일본 오사카 우메다의 뷰티샵 ‘앳코스메’에서 진행된 CJ올리브영의 메이크업 브랜드(PB) ‘바이오힐 보’의 팝업 스토어 현장. 오사카 최대 규모를 자랑하는 앳코스메 매장 한 가운데 꾸며진 팝업 스토어에는 한국에서 인기 높은 화장품을 실제로 경험해보려는 고객들로 발 디딜 틈 없이 북적거렸다. 타이완 국적자이지만 오사카에서 거주하고 있다는 32살 쿠이잉씨는 이날 팝업 스토어를 찾아 바이오힐 보의 ‘탄탄크림’을 구매했다. 사회관계망서비스(SNS)와 유튜브를 통해 한국 화장품이 좋다는 평을 들어본 터라 이번 기회에 구매해 사용해보기로 결심했다고 한다. 쿠이잉씨는 한국 화장품을 쓰면 한국 여성처럼 예뻐지지 않을까 기대가 된다고 말했다. 이날 앳코스메는 바이오힐 보 팝업 뿐만 아니라 눈에 잘 띄는 메인 진열대 상당수가 한국 브랜드 차지였다. 대부분 한국에서도 인기가 높은 브랜드들로, 입구에서 바로 보이는 진열대에는 ‘웨이크메이크’와 ‘피치씨’, ‘어뮤즈’가, 해외 명품 브랜드 존 정중앙에는 ‘헤라’가 자리하고 있었다. 일본 내 K뷰티의 인기가 예사롭지 않다. ‘제 3차 한류붐’이라고까지 일컬어지는 한류열풍을 타고 일본 내 K뷰티의 입지가 나날이 치솟고 있다. 과거에는 일본 내에서 한국 문화를 좋아하는 일부 소비자들 사이에서만 유행하는 수준이었다면, 지금은 일본 뷰티 시장에 하나의 카테고리로 K뷰티가 자리를 잡았다는 평가다. 21일 베인앤드컴퍼니와 유로모니터에 따르면 K뷰티의 일본 지역별 침투율(특정 기간 동안 특정 상품 소비 규모 비중)은 2017년 1%에서 2022년 4.9%로 5년 만에 5배가 증가했다. 최근 3년간 연평균 성장률은 20%가 넘는다. 지난해에는 일본 수입 화장품 국가별 비중에서 한국이 처음으로 프랑스를 제치고 1위에 오르기도 했다. 서효주 베인앤드컴퍼니 파트너는 지금보다 3~4배 이상 성장할 여력이 충분하다고 말했다. 일본 여성들이 K뷰티에 매료된 이유는 무엇일까. 가장 큰 이유로는 ‘높은 가성비(가격 대비 성능)’가 꼽힌다. 업계에 따르면 실제 일본에서 많이 판매되는 한국 화장품 브랜드의 기초제품들은 일본 브랜드에 비해 제품 가격이 10~20% 가량 저렴한 편이다. 이는 한국콜마와 코스맥스 같은 국내 화장품 OEM(주문자 상표 부착 생산)·ODM(주문자 개발생산) 제조사들의 성장 덕이 크다. 이들의 기술력은 세계 최고 수준으로, 세계 최대 화장품 기업인 로레알도 고객사일 정도다. 이들은 단순 제품 제조를 넘어 신제품을 개발해 브랜드에 먼저 제안하고 또 필요시 마케팅까지 지원해 브랜드를 키우는 서비스를 제공하고 있다. 한국 뷰티 브랜드 대부분이 이들을 통해 제품을 만들고 있어 중소 규모 K뷰티 브랜드도 품질이 보장된다는 얘기다. 또 K뷰티 제품의 강점으로는 △독특하고 트렌디한 컨셉 △발빠른 신제품 출시 △예쁜 패키지 등이 거론된다. 이를 방증하듯 최근 일본에선 위의 강점들을 갖춘 한국의 신진 메이크업 브랜드들이 인기다. 실제로 일본 내 트위터와 유튜브 등 SNS에서는 수십~수백만 팔로워를 보유한 현지 인플루언서들도 일명 ‘내돈내산’(내 돈 주고 내가 산 물건) 영상에서 자발적으로 K뷰티 메이크업 브랜드 제품을 소개하고 있다. 지난 1월 일본 오사카에 소재한 뷰티 랭킹샵 ‘앳코스메 우메다점’에서 일본 여성들이 한국 코스메틱 브랜드 ‘라카(Laka)’의 제품을 살펴보고 있는 모습. [김효혜 기자] 대표적인 예가 ‘라카’다. 한국보다 일본에서 더 유명한 라카는 100만 구독자를 보유하고 있는 메이크업 아티스트이자 유튜버 ‘히로’(오다기리 히로)가 영상에서 제품을 추천해 홍보 효과를 톡톡히 봤다. 이민미 라카 대표는 일본에서 특정 제품이 갑자기 하루에 수천개가 팔려 무슨 일인가 봤는데, 현지 유명 유튜버가 추천한 영상이 올라왔더라며 협찬이나 광고가 아니어서 더 놀랐다고 말했다. 이에 지난 2020년 처음 일본에 진출한 라카는 올해 1월 말 일본 전역 약 350여개 매장에 입점하는 성과를 올렸다. 2021년 47억원에 불과했던 라카의 매출도 지난해 4배가 넘게 상승해 200억원에 육박한다. 일본 시장에서 두각을 보이는 국내 화장품 브랜드들이 늘면서 새롭게 진출을 타진하거나 준비하고 있는 업체들도 늘고 있다. 그동안 한국 화장품의 가장 큰 시장이었던 중국이 경기 침체 및 정치적 이슈 등으로 쪼그라들고 있는 상황에서 일본이 이를 대체할 새로운 시장으로 부상한 것이다. 일본 화장품 판매 채널들도 K뷰티 유치에 적극적이다. 앳코스메의 경우 거의 매달 K뷰티 팝업이 열리고 있는 수준으로, 오는 5월에는 도쿄점에서 K뷰티 페스티벌도 열 계획이다. 로프트와 프라자 등도 K뷰티 유치 경쟁이 뜨겁다. CJ올리브영 관계자는 한국 화장품에 대한 반응이 좋고 특히 올리브영에서 인기 있는 브랜드에 대한 수요가 높다 보니 플랫폼에서 먼저 팝업 요청이 왔다며 앞으로도 일본 시장 유통에 더욱 적극적으로 나서려 한다고 전했다."
prompt = prompt_template.format(passage)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs,
max_new_tokens=512,
temperature=1,
top_p=0.95,
do_sample=True,
use_cache=False)
print(tokenizer.decode(outputs[0]))
- 오사카 뷰티샵 ‘앳코스메’에서는 한국에서 메이크업 브랜드인 ‘바이오힐 보’의 팝업 스토어를 열어 고객들이 경험해보고자 하며 인기를 끌고 있고, 또한 주변에는 한국 브랜드들이 많이 배치되어 있어 K뷰티가 일본에서 하나의 카테고리로 자리잡고 있다고 말씀드릴 수 있습니다.
# example
prompt_template = "다음 글을 요약하세요.:{}\n요약:"
passage = "유엔 안전보장이사회가 14일(현지시간) 이스라엘의 요청으로 긴급회의를 소집하고 이란의 군사 공격에 대해 논의했다. 이란과 이스라엘은 이 자리에서 치열한 설전을 벌였고, 회원국들은 확전 방지를 위해 당사국들의 자제를 촉구했다. 가디언 등에 따르면 이날 안보리 회의에서 이란과 이스라엘 대사는 서로를 겨냥해 중동 평화의 위협이라고 강하게 비난했다. 아미르 사에이드 이라바니 주유엔 이란 대사는 이번 공격과 관련해 “국제법에 따른 자위권을 행사할 수밖에 없었던 상황”이라면서 “이란은 중동지역 긴장을 고조시키거나 전쟁을 추구하지 않는다는 일관된 입장을 가지고 있다”고 말했다. 이번 공격은 지난 1일 이스라엘이 주시리아 이란 영사관을 공격한 데 대한 대응이었다는 점을 강조한 것이다. 이라바니 대사는 “이스라엘 정권의 추가적인 군사적 도발에 대해 경고하고자 한다”며 “이란은 국민과 국가안보, 주권, 영토를 방어하기 위한 단호한 결의를 가지고 있음을 단언한다”고 말했다. 길라드 에르단 주유엔 이스라엘 대사는 “이란의 군대는 하마스와 헤즈볼라, 후티, 혁명수비대, 그 외 야만적인 지하디스트(이슬람 성전주의자)를 포함한다”며 “이스라엘의 방공시스템이 우월한 것으로 증명됐다고 해서 이란의 잔혹한 공격이 바뀌는 것은 아니다. 이란은 더는 대리자 뒤에 숨지 말아야 한다”고 말했다. 그러면서 “안보리는 행동에 나서야 한다”며 “이란의 테러 행위를 비난하고 스냅백 메커니즘(핵협정 등을 위반했을 때 제재를 부활하는 것)을 작동해 이란 혁명수비대를 테러단체로 지정해야 한다”고 안보리 제재를 촉구했다. 국제사회는 중동지역의 확전을 우려하면서 자제를 요청했다. 안토니우 구테흐스 유엔 사무총장은 이날 “중동 주민들은 파괴적인 전면전의 실제 위험에 직면하고 있다”며 “지금은 진정하고 긴장을 완화하면서 최대한 자제해야 하는 시기”라고 말했다. 로버트 우드 주유엔 미국 차석대사는 “안보리는 명백히 이란의 공격 행위를 비난하고 이란 및 이란의 파트너와 대리자들에게 공격을 멈춰야 한다고 촉구해야 한다”고 말했다. 반면 이란, 시리아, 러시아, 중국 대사는 이스라엘의 미사일·드론 요격을 도운 미국 등 동맹국을 비판했다. 또 이스라엘이 시리아 주재 이란 영사관을 공격한 것에 대해서는 미국 등이 비판하지 않는다고도 지적했다. 이날 안보리는 이란의 공격을 규탄하는 공동성명을 발표하거나 제재를 가하는 등 조치 없이 종료됐다."
prompt = prompt_template.format(passage)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs,
max_new_tokens=512,
temperature=1,
top_p=0.95,
do_sample=True,
use_cache=False)
print(tokenizer.decode(outputs[0]))
- 14일(현지시간) 유엔 안전보장이사회가 이스라엘의 요청으로 긴급회의를 소집하여 이란과 이스라엘 대사가 서로를 겨냥해 중동 평화의 위협이라고 강하게 비난하는 논의를 벌였으나, 국제사회는 중동지역의 확전을 우려하며 당사국들의 자제를 촉구하였지만 결국 조치 없이 종료되었습니다.
```
|
{"language": ["ko"], "library_name": "peft", "tags": ["summarization", "gemma"], "base_model": "google/gemma-7b"}
|
brildev7/gemma-7b-polite-summarization-ko-sft-qlora
| null |
[
"peft",
"safetensors",
"summarization",
"gemma",
"ko",
"base_model:google/gemma-7b",
"region:us"
] | null |
2024-04-15T12:41:43+00:00
|
[] |
[
"ko"
] |
TAGS
#peft #safetensors #summarization #gemma #ko #base_model-google/gemma-7b #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
Summarise Korean sentences concisely
- Developed by: [Kang Seok Ju]
- Contact: [brildev7@URL]
## Training Details
### Training Data
URL
# Inference Examples
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\nSummarise Korean sentences concisely\n- Developed by: [Kang Seok Ju]\n- Contact: [brildev7@URL]",
"## Training Details",
"### Training Data\nURL",
"# Inference Examples"
] |
[
"TAGS\n#peft #safetensors #summarization #gemma #ko #base_model-google/gemma-7b #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\nSummarise Korean sentences concisely\n- Developed by: [Kang Seok Ju]\n- Contact: [brildev7@URL]",
"## Training Details",
"### Training Data\nURL",
"# Inference Examples"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results3
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5832
- Accuracy: 0.7133
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
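Pending more detail from the author, the checkpoint can still be loaded as a standard 🤗 Transformers text-classification model. The snippet below is a minimal, hedged sketch; the label names and the intended task are assumptions, since neither is documented in this card.

```python
# Hedged inference sketch; label names (e.g. LABEL_0/LABEL_1) are the
# framework defaults and may not reflect the actual task this model was
# fine-tuned for.
from transformers import pipeline

classifier = pipeline("text-classification", model="dianamihalache27/results3")
print(classifier("Oh great, another Monday morning meeting."))
```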
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "xlnet-base-cased", "model-index": [{"name": "results3", "results": []}]}
|
dianamihalache27/results3
| null |
[
"transformers",
"safetensors",
"xlnet",
"text-classification",
"generated_from_trainer",
"base_model:xlnet-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:45:01+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #xlnet #text-classification #generated_from_trainer #base_model-xlnet-base-cased #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# results3
This model is a fine-tuned version of xlnet-base-cased on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5832
- Accuracy: 0.7133
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# results3\n\nThis model is a fine-tuned version of xlnet-base-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.5832\n- Accuracy: 0.7133\n- F1: 0.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #xlnet #text-classification #generated_from_trainer #base_model-xlnet-base-cased #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# results3\n\nThis model is a fine-tuned version of xlnet-base-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.5832\n- Accuracy: 0.7133\n- F1: 0.0",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
reinforcement-learning
|
ml-agents
|
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
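As a hedged, unofficial sketch (not part of the original card), the trained artifacts (ONNX policy, run configuration, TensorBoard logs) can be fetched locally with `huggingface_hub` before opening them in the Unity ML-Agents tooling:

```python
# Hedged sketch: download the trained Huggy artifacts from the Hub.
# How they are consumed afterwards depends on your Unity / ML-Agents setup.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="expilu/ppo-Huggy")
print("Model files downloaded to:", local_dir)
```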
|
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
|
expilu/ppo-Huggy
| null |
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null |
2024-04-15T12:46:27+00:00
|
[] |
[] |
TAGS
#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
|
# ppo Agent playing Huggy
This is a trained model of a ppo agent playing Huggy
using the Unity ML-Agents Library.
|
[
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library."
] |
[
"TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n",
"# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library."
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa_kor_math
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3294
## Model description
Given a Korean math word problem as input, the model was fine-tuned to output the problem type, a description of that type, a solution (code), and the final answer.<br/>
The problem types are arithmetic operations, ordering, combinations, finding numbers, magnitude comparison, and geometry.<br/>
The cause is not yet clear, but the accuracy does not appear to be very high.<br/>
## Intended uses & limitations
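No usage notes were provided by the author; the snippet below is a minimal, hedged inference sketch. The expected input format is an assumption (no prompt template is documented), and the example problem is illustrative only.

```python
# Hypothetical usage sketch for the fine-tuned KoBART model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("idah4/qa_kor_math")
model = AutoModelForSeq2SeqLM.from_pretrained("idah4/qa_kor_math")

# "There are 3 apples and 5 pears. How many fruits are there in all?"
problem = "사과가 3개, 배가 5개 있습니다. 과일은 모두 몇 개일까요?"
inputs = tokenizer(problem, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```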
## Training and evaluation data
Trained on the training dataset that [TUNiB.ai](https://tunib.ai/) released on [github](https://github.com/tunib-ai/KMWP).<br/>
## Training procedure
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.56 | 100 | 3.5725 |
| No log | 1.13 | 200 | 1.2367 |
| No log | 1.69 | 300 | 0.7100 |
| No log | 2.26 | 400 | 0.5420 |
| 2.4974 | 2.82 | 500 | 0.5891 |
| 2.4974 | 3.39 | 600 | 0.5370 |
| 2.4974 | 3.95 | 700 | 0.4738 |
| 2.4974 | 4.52 | 800 | 0.4985 |
| 2.4974 | 5.08 | 900 | 0.4540 |
| 0.3445 | 5.65 | 1000 | 0.4439 |
| 0.3445 | 6.21 | 1100 | 0.4261 |
| 0.3445 | 6.78 | 1200 | 0.4007 |
| 0.3445 | 7.34 | 1300 | 0.3739 |
| 0.3445 | 7.91 | 1400 | 0.3937 |
| 0.26 | 8.47 | 1500 | 0.3550 |
| 0.26 | 9.04 | 1600 | 0.3623 |
| 0.26 | 9.6 | 1700 | 0.3944 |
| 0.26 | 10.17 | 1800 | 0.3669 |
| 0.26 | 10.73 | 1900 | 0.3628 |
| 0.217 | 11.3 | 2000 | 0.3703 |
| 0.217 | 11.86 | 2100 | 0.3580 |
| 0.217 | 12.43 | 2200 | 0.3318 |
| 0.217 | 12.99 | 2300 | 0.3199 |
| 0.217 | 13.56 | 2400 | 0.3537 |
| 0.1916 | 14.12 | 2500 | 0.3198 |
| 0.1916 | 14.69 | 2600 | 0.3317 |
| 0.1916 | 15.25 | 2700 | 0.3333 |
| 0.1916 | 15.82 | 2800 | 0.3280 |
| 0.1916 | 16.38 | 2900 | 0.3269 |
| 0.1737 | 16.95 | 3000 | 0.3315 |
| 0.1737 | 17.51 | 3100 | 0.3346 |
| 0.1737 | 18.08 | 3200 | 0.3290 |
| 0.1737 | 18.64 | 3300 | 0.3317 |
| 0.1737 | 19.21 | 3400 | 0.3282 |
| 0.1637 | 19.77 | 3500 | 0.3294 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gogamza/kobart-base-v2", "model-index": [{"name": "qa_kor_math", "results": []}]}
|
idah4/qa_kor_math
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:gogamza/kobart-base-v2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:48:59+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-gogamza/kobart-base-v2 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
qa\_kor\_math
=============
This model is a fine-tuned version of gogamza/kobart-base-v2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3294
Model description
-----------------
Given a Korean math word problem as input, the model was fine-tuned to output the problem type, a description of that type, a solution (code), and the final answer.
The problem types are arithmetic operations, ordering, combinations, finding numbers, magnitude comparison, and geometry.
The cause is not yet clear, but the accuracy does not appear to be very high.
Intended uses & limitations
---------------------------
Training and evaluation data
----------------------------
Trained on the training dataset that TUNiB.ai released on github.
Training procedure
------------------
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 400
* num\_epochs: 20
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-gogamza/kobart-base-v2 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/stabilityai/StableBeluga2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/StableBeluga2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
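Not part of the original README, but as a rough sketch: single-file quants from the table below load directly with `llama-cpp-python`, while the two-part Q6_K file must first be re-joined into one file. File names follow the table in this card; the prompt format shown is the Stable Beluga convention and is an assumption here.

```python
# Hedged sketch using llama-cpp-python; paths and prompt are placeholders.
import shutil
from llama_cpp import Llama

# Re-assemble the split Q6_K quant (only needed for multi-part downloads).
parts = ["StableBeluga2.i1-Q6_K.gguf.part1of2", "StableBeluga2.i1-Q6_K.gguf.part2of2"]
with open("StableBeluga2.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)

# Load a single-file quant and run a short completion.
llm = Llama(model_path="StableBeluga2.i1-Q4_K_M.gguf", n_ctx=2048)
out = llm("### User:\nWhat is Stable Beluga 2?\n\n### Assistant:\n", max_tokens=64)
print(out["choices"][0]["text"])
```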
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/StableBeluga2-i1-GGUF/resolve/main/StableBeluga2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "datasets": ["conceptofmind/cot_submix_original", "conceptofmind/flan2021_submix_original", "conceptofmind/t0_submix_original", "conceptofmind/niv2_submix_original"], "base_model": "stabilityai/StableBeluga2", "quantized_by": "mradermacher"}
|
mradermacher/StableBeluga2-i1-GGUF
| null |
[
"transformers",
"gguf",
"en",
"dataset:conceptofmind/cot_submix_original",
"dataset:conceptofmind/flan2021_submix_original",
"dataset:conceptofmind/t0_submix_original",
"dataset:conceptofmind/niv2_submix_original",
"base_model:stabilityai/StableBeluga2",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:50:41+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #dataset-conceptofmind/cot_submix_original #dataset-conceptofmind/flan2021_submix_original #dataset-conceptofmind/t0_submix_original #dataset-conceptofmind/niv2_submix_original #base_model-stabilityai/StableBeluga2 #endpoints_compatible #region-us
|
About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #dataset-conceptofmind/cot_submix_original #dataset-conceptofmind/flan2021_submix_original #dataset-conceptofmind/t0_submix_original #dataset-conceptofmind/niv2_submix_original #base_model-stabilityai/StableBeluga2 #endpoints_compatible #region-us \n"
] |
text-generation
|
adapter-transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"language": ["en"], "license": "apache-2.0", "library_name": "adapter-transformers", "datasets": ["mahekjasani/kkkk"], "pipeline_tag": "text-generation"}
|
kvrma/kkkkkk
| null |
[
"adapter-transformers",
"cohere",
"text-generation",
"en",
"dataset:mahekjasani/kkkk",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T12:51:59+00:00
|
[
"1910.09700"
] |
[
"en"
] |
TAGS
#adapter-transformers #cohere #text-generation #en #dataset-mahekjasani/kkkk #arxiv-1910.09700 #license-apache-2.0 #region-us
|
# Model Card for Model ID
This modelcard aims to be a base template for new models. It has been generated using this raw template.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#adapter-transformers #cohere #text-generation #en #dataset-mahekjasani/kkkk #arxiv-1910.09700 #license-apache-2.0 #region-us \n",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
pandafm/donutES-vf3
| null |
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:54:52+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-fine-tuned-large-v2-company-earnings-call-v0
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Wer: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
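Pending more detail from the author, the checkpoint can be tried like any other Whisper model through the 🤗 `pipeline` API. The snippet below is a non-authoritative sketch; the audio file name is a placeholder.

```python
# Hedged usage sketch; "earnings_call_sample.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="MasatoShima1618/Whisper-fine-tuned-large-v2-company-earnings-call-v0",
)
result = asr("earnings_call_sample.wav")
print(result["text"])
```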
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 5.0 | 10 | 0.0465 | 7.0243 |
| No log | 10.0 | 20 | 0.0028 | 0.0 |
| 0.0962 | 15.0 | 30 | 0.0013 | 0.0 |
| 0.0962 | 20.0 | 40 | 0.0010 | 0.0 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "openai/whisper-large-v2", "model-index": [{"name": "Whisper-fine-tuned-large-v2-company-earnings-call-v0", "results": []}]}
|
MasatoShima1618/Whisper-fine-tuned-large-v2-company-earnings-call-v0
| null |
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T12:57:07+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-large-v2 #license-apache-2.0 #endpoints_compatible #region-us
|
Whisper-fine-tuned-large-v2-company-earnings-call-v0
====================================================
This model is a fine-tuned version of openai/whisper-large-v2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0010
* Wer: 0.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* distributed\_type: multi-GPU
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 5
* training\_steps: 40
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* distributed\\_type: multi-GPU\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* training\\_steps: 40\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #base_model-openai/whisper-large-v2 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* distributed\\_type: multi-GPU\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* training\\_steps: 40\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-36-layer](https://huggingface.co/Citaman/command-r-36-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-36-layer
layer_range: [0, 35]
- model: Citaman/command-r-36-layer
layer_range: [1, 36]
merge_method: slerp
base_model: Citaman/command-r-36-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
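
Not part of the original card, but as a quick sanity check the merged checkpoint can be loaded with 🤗 Transformers; the dtype and device settings below are illustrative assumptions.

```python
# Hedged loading sketch for the merged checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Citaman/command-r-35-layer")
model = AutoModelForCausalLM.from_pretrained(
    "Citaman/command-r-35-layer",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```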
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-36-layer"]}
|
Citaman/command-r-35-layer
| null |
[
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-36-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:02:26+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-36-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-36-layer
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-36-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-36-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-36-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-to-image
|
diffusers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "diffusers"}
|
Niggendar/kakarot25dchichi
| null |
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-15T13:06:00+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
heyllm234/sc23
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T13:08:54+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.2
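A minimal usage sketch, assuming the adapter in this repository is loaded on top of the (gated) Llama-2 base model; the repo id below is taken from this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"

base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Star3073/outputs")  # attach the PEFT adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)
```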
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "outputs", "results": []}]}
|
Star3073/outputs
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null |
2024-04-15T13:11:33+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
|
# outputs
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.2
|
[
"# outputs\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.14.7\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"# outputs\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.14.7\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range:
- 0
- 32
- model: openchat/openchat-3.5-0106
layer_range:
- 0
- 32
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
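For reference, SLERP interpolates between two weight tensors along the arc between them rather than along a straight line. A rough single-tensor sketch of the idea (mergekit's actual per-tensor implementation handles normalization and degenerate cases differently):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative only)."""
    a_flat, b_flat = a.flatten(), b.flatten()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1 + eps, 1 - eps))
    so = torch.sin(omega)
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```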
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["mistralai/Mistral-7B-Instruct-v0.2", "openchat/openchat-3.5-0106"]}
|
bingbort/mergekit-slerp-vehkdva
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:openchat/openchat-3.5-0106",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:12:37+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-openchat/openchat-3.5-0106 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* mistralai/Mistral-7B-Instruct-v0.2
* openchat/openchat-3.5-0106
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* mistralai/Mistral-7B-Instruct-v0.2\n* openchat/openchat-3.5-0106",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-openchat/openchat-3.5-0106 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* mistralai/Mistral-7B-Instruct-v0.2\n* openchat/openchat-3.5-0106",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation
|
transformers
|
# Model Card for Model ID
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
MoM: Mixture of Mixture
This model is a test that combines the [Jamba](https://huggingface.co/ai21labs/Jamba-v0.1) architecture with 1.58-bit linear layers (**except for the attention layers**), mixture of attention heads, and mixture of depths.
The goal is to develop and test whether this kind of architecture can deliver fast inference without too much quality loss.
Only 17.8M parameters out of the 1025M total are kept in bf16 precision, which is ~1.7% of the total number of parameters.
- **Model type:** Mixture of attention heads, mixture of depths, and mixture of experts with 1.58-bit linear layers (**except for the attention layers**)
- **License:** Apache License 2.0
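A rough sketch of the kind of 1.58-bit (ternary-weight) linear layer described above, in the style of BitNet b1.58; this is an illustration only, not the exact layer used in this repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TernaryLinear(nn.Module):
    """Illustrative 1.58-bit linear layer: weights quantized to {-1, 0, +1} with a per-tensor scale."""
    def __init__(self, in_features: int, out_features: int, bias: bool = False):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        self.bias = nn.Parameter(torch.zeros(out_features)) if bias else None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        scale = w.abs().mean().clamp(min=1e-5)      # per-tensor scale
        w_q = (w / scale).round().clamp(-1, 1)      # ternary weights
        # straight-through estimator: forward uses quantized weights, gradients flow to full precision
        w_ste = w + (w_q * scale - w).detach()
        return F.linear(x, w_ste, self.bias)
```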
### Model Sources [optional]
- **Repository:** https://github.com/ostix360/optimized-LLM
## How to Get Started with the Model
If you want to test this model, please check out this repository at this [commit](https://github.com/ostix360/optimized-LLM/tree/04cae61fb252a5927756c86ec0efde32d0dd3794).
## Training Details
- **wandb**: [training detail](https://wandb.ai/ostix360/Mixture%20of%20mixture%20(mod,%20moah%20moe)/runs/68hieuwt)
### Training Data
We use the first 100k examples of Locutusque/UltraTextbooks to train this model.
### Training Procedure
We use 8-bit Adam with the default beta and epsilon values.
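A minimal sketch of such an 8-bit Adam setup with `bitsandbytes` (the learning rate below is illustrative; betas and epsilon are left at their defaults, as stated above):

```python
import torch.nn as nn
import bitsandbytes as bnb

model = nn.Linear(512, 512)  # stand-in for the actual model
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)  # defaults: betas=(0.9, 0.999), eps=1e-8
```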
#### Preprocessing [optional]
The data are processed to fit the model's maximum length, i.e. 512 tokens.
#### Training Hyperparameters
Please look at the wandb metadata file or the train.py file in the repo to see the hyperparameters
## Technical Specifications [optional]
### Compute Infrastructure
#### Hardware
- one 4070 ti GPU
#### Software
- PyTorch, Transformers, etc.
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe", "moah", "mod"], "datasets": ["Locutusque/UltraTextbooks"]}
|
Ostixe360/MoMv3-mixed-precision
| null |
[
"transformers",
"safetensors",
"text-generation",
"moe",
"moah",
"mod",
"en",
"dataset:Locutusque/UltraTextbooks",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T13:12:40+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #text-generation #moe #moah #mod #en #dataset-Locutusque/UltraTextbooks #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
MoM: Mixture of Mixture
This model is a test that combines the Jamba architecture with 1.58-bit linear layers (except for the attention layers), mixture of attention heads, and mixture of depths.
The goal is to develop and test whether this kind of architecture can deliver fast inference without too much quality loss.
Only 17.8M parameters out of the 1025M total are kept in bf16 precision, which is ~1.7% of the total number of parameters.
- Model type: Mixture of attention heads, mixture of depths, and mixture of experts with 1.58-bit linear layers (except for the attention layers)
- License: Apache License 2.0
### Model Sources [optional]
- Repository: URL
## How to Get Started with the Model
If you want to test this model please look at this repo at this commit
## Training Details
- wandb: training detail/runs/68hieuwt)
### Training Data
We use the first 100k examples of Locutusque/UltraTextbooks to train this model.
### Training Procedure
We use 8-bit Adam with the default beta and epsilon values.
#### Preprocessing [optional]
The data are processed to fit the model's maximum length, i.e. 512 tokens.
#### Training Hyperparameters
Please look at the wandb metadata file or the URL file in the repo to see the hyperparameters
## Technical Specifications [optional]
### Compute Infrastructure
#### Hardware
- one 4070 ti GPU
#### Software
- pytorch, transformers etc
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nMoM: Mixture of Mixture\n\nThis Model is a test to combine Jamba architecture with 1.58 bits linear layers excpted for attention layer, mixture of attention head and mixture of depth.\n\nThe goal is to developpe and test if this kind of architectures have not too much quality loss for a fast inference.\n\nOnly 17.8M parameter over 1025 is in bf16 precision wich is ~ 1.7% of the total number of parameters\n\n\n- Model type: Mixture of attention head mixture of depth and mixture of expert 1.58bit linear layers excepted for attention layer\n- License: Apache licence 2.0",
"### Model Sources [optional]\n\n\n- Repository: URL",
"## How to Get Started with the Model\n\n\nIf you want to test this model please look at this repo at this commit",
"## Training Details\n\n - wandb: training detail/runs/68hieuwt)",
"### Training Data\n\nWe use the first 100k data of Locutusque/UltraTextbooks to train this model",
"### Training Procedure\n\nWe use adam-8 bits with default betas and epsilon values",
"#### Preprocessing [optional]\n\n\nThe data fit the model max length i.e. 512 tokens",
"#### Training Hyperparameters\n\nPlease look at the wandb metadata file or the URL file in the repo to see the hyperparameters",
"## Technical Specifications [optional]",
"### Compute Infrastructure",
"#### Hardware\n\n- one 4070 ti GPU",
"#### Software\n\n- pytorch, transformers etc"
] |
[
"TAGS\n#transformers #safetensors #text-generation #moe #moah #mod #en #dataset-Locutusque/UltraTextbooks #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nMoM: Mixture of Mixture\n\nThis Model is a test to combine Jamba architecture with 1.58 bits linear layers excpted for attention layer, mixture of attention head and mixture of depth.\n\nThe goal is to developpe and test if this kind of architectures have not too much quality loss for a fast inference.\n\nOnly 17.8M parameter over 1025 is in bf16 precision wich is ~ 1.7% of the total number of parameters\n\n\n- Model type: Mixture of attention head mixture of depth and mixture of expert 1.58bit linear layers excepted for attention layer\n- License: Apache licence 2.0",
"### Model Sources [optional]\n\n\n- Repository: URL",
"## How to Get Started with the Model\n\n\nIf you want to test this model please look at this repo at this commit",
"## Training Details\n\n - wandb: training detail/runs/68hieuwt)",
"### Training Data\n\nWe use the first 100k data of Locutusque/UltraTextbooks to train this model",
"### Training Procedure\n\nWe use adam-8 bits with default betas and epsilon values",
"#### Preprocessing [optional]\n\n\nThe data fit the model max length i.e. 512 tokens",
"#### Training Hyperparameters\n\nPlease look at the wandb metadata file or the URL file in the repo to see the hyperparameters",
"## Technical Specifications [optional]",
"### Compute Infrastructure",
"#### Hardware\n\n- one 4070 ti GPU",
"#### Software\n\n- pytorch, transformers etc"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
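To make the V-shaped schedule concrete, the five anchor values above can be expanded into per-layer interpolation weights for a 32-layer Mistral model roughly like this (mergekit's exact interpolation of the curve may differ):

```python
import numpy as np

anchors = [0, 0.5, 1, 0.5, 0]  # V-shaped curve from the config above
num_layers = 32
t_per_layer = np.interp(
    np.linspace(0, 1, num_layers),        # layer positions
    np.linspace(0, 1, len(anchors)),      # anchor positions
    anchors,
)
# t≈0 keeps the base model (Hermes) at the ends; t≈1 favors WizardMath in the middle layers
print(t_per_layer.round(2))
```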
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Hermes-2-Pro-Mistral-7B", "WizardLM/WizardMath-7B-V1.1"]}
|
mergekit-community/mergekit-slerp-rcoqutv
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:WizardLM/WizardMath-7B-V1.1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:16:30+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* NousResearch/Hermes-2-Pro-Mistral-7B
* WizardLM/WizardMath-7B-V1.1
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null | null |
# Inspire-7B-slerp
Inspire-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1)
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "EmbeddedLLM/Mistral-7B-Merge-14-v0.1", "cognitivecomputations/dolphin-2.8-mistral-7b-v02"]}
|
tvkkishore/backup
| null |
[
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.1",
"cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T13:17:58+00:00
|
[] |
[] |
TAGS
#merge #mergekit #lazymergekit #mistralai/Mistral-7B-Instruct-v0.2 #EmbeddedLLM/Mistral-7B-Merge-14-v0.1 #cognitivecomputations/dolphin-2.8-mistral-7b-v02 #license-apache-2.0 #region-us
|
# Inspire-7B-slerp
Inspire-7B-slerp is a merge of the following models using mergekit:
* mistralai/Mistral-7B-Instruct-v0.2
* EmbeddedLLM/Mistral-7B-Merge-14-v0.1
* cognitivecomputations/dolphin-2.8-mistral-7b-v02
## Configuration
|
[
"# Inspire-7B-slerp\n\nInspire-7B-slerp is a merge of the following models using mergekit:\n* mistralai/Mistral-7B-Instruct-v0.2\n* EmbeddedLLM/Mistral-7B-Merge-14-v0.1\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"## Configuration\n\n\\"
] |
[
"TAGS\n#merge #mergekit #lazymergekit #mistralai/Mistral-7B-Instruct-v0.2 #EmbeddedLLM/Mistral-7B-Merge-14-v0.1 #cognitivecomputations/dolphin-2.8-mistral-7b-v02 #license-apache-2.0 #region-us \n",
"# Inspire-7B-slerp\n\nInspire-7B-slerp is a merge of the following models using mergekit:\n* mistralai/Mistral-7B-Instruct-v0.2\n* EmbeddedLLM/Mistral-7B-Merge-14-v0.1\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"## Configuration\n\n\\"
] |
null |
transformers
|
# bingbort/mergekit-slerp-vehkdva-Q8_0-GGUF
This model was converted to GGUF format from [`bingbort/mergekit-slerp-vehkdva`](https://huggingface.co/bingbort/mergekit-slerp-vehkdva) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bingbort/mergekit-slerp-vehkdva) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo bingbort/mergekit-slerp-vehkdva-Q8_0-GGUF --model mergekit-slerp-vehkdva.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo bingbort/mergekit-slerp-vehkdva-Q8_0-GGUF --model mergekit-slerp-vehkdva.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mergekit-slerp-vehkdva.Q8_0.gguf -n 128
```
|
{"library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["mistralai/Mistral-7B-Instruct-v0.2", "openchat/openchat-3.5-0106"]}
|
bingbort/mergekit-slerp-vehkdva-Q8_0-GGUF
| null |
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:openchat/openchat-3.5-0106",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T13:18:48+00:00
|
[] |
[] |
TAGS
#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-openchat/openchat-3.5-0106 #endpoints_compatible #region-us
|
# bingbort/mergekit-slerp-vehkdva-Q8_0-GGUF
This model was converted to GGUF format from 'bingbort/mergekit-slerp-vehkdva' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# bingbort/mergekit-slerp-vehkdva-Q8_0-GGUF\nThis model was converted to GGUF format from 'bingbort/mergekit-slerp-vehkdva' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-openchat/openchat-3.5-0106 #endpoints_compatible #region-us \n",
"# bingbort/mergekit-slerp-vehkdva-Q8_0-GGUF\nThis model was converted to GGUF format from 'bingbort/mergekit-slerp-vehkdva' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
feature-extraction
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlong-t5-tglobal-base
This model is a fine-tuned version of [agemagician/mlong-t5-tglobal-base](https://huggingface.co/agemagician/mlong-t5-tglobal-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1553
- Rouge1: 32.0603
- Rouge2: 13.4985
- Rougel: 24.0775
- Rougelsum: 25.9692
- Gen Len: 72.828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Gen Len | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:-------:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 500 | 18.987 | 2.2709 | 20.5043 | 8.1518 | 16.9526 | 17.5001 |
| 2.8714 | 2.0 | 1000 | 18.982 | 2.2022 | 21.4051 | 8.7445 | 17.7534 | 18.3191 |
| 2.8714 | 3.0 | 1500 | 18.99 | 2.1608 | 21.6609 | 9.1753 | 18.0374 | 18.6176 |
| 2.5137 | 4.0 | 2000 | 18.993 | 2.1555 | 21.6818 | 9.1814 | 18.0382 | 18.6198 |
| 2.5137 | 5.0 | 2500 | 18.994 | 2.1462 | 21.9708 | 9.2033 | 18.3919 | 18.9535 |
| 2.3717 | 6.0 | 3000 | 18.996 | 2.1258 | 22.0583 | 9.2987 | 18.4379 | 19.0322 |
| 2.3717 | 7.0 | 3500 | 18.989 | 2.1278 | 21.8245 | 9.0474 | 18.1979 | 18.8038 |
| 2.2633 | 8.0 | 4000 | 18.996 | 2.1207 | 21.6273 | 8.8847 | 18.024 | 18.6049 |
| 2.2633 | 9.0 | 4500 | 18.994 | 2.1180 | 22.2004 | 9.6253 | 18.6373 | 19.1721 |
| 2.1886 | 10.0 | 5000 | 18.988 | 2.1220 | 22.1619 | 9.6206 | 18.5069 | 19.0856 |
| 2.1886 | 11.0 | 5500 | 18.987 | 2.1161 | 22.1518 | 9.4522 | 18.4695 | 19.0552 |
| 2.1144 | 12.0 | 6000 | 18.995 | 2.1103 | 22.0395 | 9.4185 | 18.4314 | 19.0305 |
| 2.1144 | 13.0 | 6500 | 18.992 | 2.1150 | 22.2404 | 9.4722 | 18.5482 | 19.1747 |
| 2.054 | 14.0 | 7000 | 19.0 | 2.1091 | 22.1466 | 9.3434 | 18.3443 | 18.9233 |
| 2.0526 | 1.0 | 8000 | 62.488 | 2.1580 | 30.4149 | 12.0774 | 22.9493 | 24.4478 |
| 2.1236 | 2.0 | 16000 | 64.797 | 2.1621 | 31.3101 | 13.3237 | 23.8249 | 25.526 |
| 2.0776 | 3.0 | 24000 | 57.059 | 2.1607 | 30.9902 | 12.3753 | 23.0243 | 24.8308 |
| 1.9843 | 4.0 | 32000 | 72.828 | 2.1553 | 32.0603 | 13.4985 | 24.0775 | 25.9692 |
### Framework versions
- Transformers 4.38.2
- Pytorch 1.13.1+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
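A minimal inference sketch for the fine-tuned checkpoint, assuming it loads as a standard seq2seq (LongT5) model; the repo id is taken from this record and the input text is a placeholder:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "hesum-anonymous/mT5LongHeSum-base"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

inputs = tokenizer("Long source document ...", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```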
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "agemagician/mlong-t5-tglobal-base", "model-index": [{"name": "mlong-t5-tglobal-base", "results": []}]}
|
hesum-anonymous/mT5LongHeSum-base
| null |
[
"transformers",
"safetensors",
"longt5",
"feature-extraction",
"generated_from_trainer",
"base_model:agemagician/mlong-t5-tglobal-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T13:19:19+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #longt5 #feature-extraction #generated_from_trainer #base_model-agemagician/mlong-t5-tglobal-base #license-apache-2.0 #endpoints_compatible #region-us
|
mlong-t5-tglobal-base
=====================
This model is a fine-tuned version of agemagician/mlong-t5-tglobal-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1553
* Rouge1: 32.0603
* Rouge2: 13.4985
* Rougel: 24.0775
* Rougelsum: 25.9692
* Gen Len: 72.828
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 1
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 1.13.1+cu117
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 1.13.1+cu117\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #longt5 #feature-extraction #generated_from_trainer #base_model-agemagician/mlong-t5-tglobal-base #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 1.13.1+cu117\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
* [TheDrummer/Moistral-11B-v2](https://huggingface.co/TheDrummer/Moistral-11B-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
---
models:
- model: TheDrummer/Moistral-11B-v2
parameters:
weight: 0.3
- model: Sao10K/Fimbulvetr-11B-v2
parameters:
weight: 0.7
merge_method: linear
dtype: float16
```
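For reference, a linear merge with these weights amounts to a per-tensor weighted average of the two checkpoints. A toy sketch of the operation on matching tensors (mergekit additionally handles weight normalization, dtype casting, and tokenizer alignment):

```python
import torch

def linear_merge(tensors: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    """Weighted average of matching parameter tensors (illustrative helper)."""
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, tensors)) / total

# e.g. merged = 0.3 * moistral_tensor + 0.7 * fimbulvetr_tensor, per the config above
```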
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Sao10K/Fimbulvetr-11B-v2", "TheDrummer/Moistral-11B-v2"]}
|
Tokerss/mergekit-linear-cnukgdw
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:Sao10K/Fimbulvetr-11B-v2",
"base_model:TheDrummer/Moistral-11B-v2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:20:52+00:00
|
[
"2203.05482"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2203.05482 #base_model-Sao10K/Fimbulvetr-11B-v2 #base_model-TheDrummer/Moistral-11B-v2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the linear merge method.
### Models Merged
The following models were included in the merge:
* Sao10K/Fimbulvetr-11B-v2
* TheDrummer/Moistral-11B-v2
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the linear merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Sao10K/Fimbulvetr-11B-v2\n* TheDrummer/Moistral-11B-v2",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #arxiv-2203.05482 #base_model-Sao10K/Fimbulvetr-11B-v2 #base_model-TheDrummer/Moistral-11B-v2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the linear merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Sao10K/Fimbulvetr-11B-v2\n* TheDrummer/Moistral-11B-v2",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-to-image
|
diffusers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "diffusers"}
|
Niggendar/HardcoreAsianPorn_v20
| null |
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-15T13:21:14+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
starchat-alpha - bnb 4bits
- Model creator: https://huggingface.co/HuggingFaceH4/
- Original model: https://huggingface.co/HuggingFaceH4/starchat-alpha/
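As a rough, hedged illustration (not part of the original card), a pre-quantized bnb 4-bit checkpoint like this one can typically be loaded directly with 🤗 Transformers, since the quantization settings ship in the repository's config; `bitsandbytes` and a CUDA GPU are assumed, and the prompt below is only an example.

```python
# Hedged sketch: loading this bnb 4-bit quantized checkpoint with transformers.
# Assumes `bitsandbytes` is installed and a CUDA GPU is available; the quantization
# settings are expected to be picked up from the repository's config automatically.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/HuggingFaceH4_-_starchat-alpha-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# StarChat-style prompt; <|end|> (token id 49155) marks the end of a turn.
prompt = "<|system|>\n<|end|>\n<|user|>\nHow do I sort a list in Python?<|end|>\n<|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.2, eos_token_id=49155)
print(tokenizer.decode(output[0]))
```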
Original model description:
---
license: bigcode-openrail-m
datasets:
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
language:
- en
library_name: transformers
tags:
- code
---
# Model Card for StarChat Alpha
<!-- Provide a quick summary of what the model is/does. -->
_Note, you may be interested in the Beta version of StarChat [here](https://huggingface.co/HuggingFaceH4/starchat-beta)._
StarChat is a series of language models that are fine-tuned from StarCoder to act as helpful coding assistants. StarChat Alpha is the first of these models, and as an alpha release it is only intended for educational or research purposes. In particular, the model has not been aligned to human preferences with techniques like RLHF, so it may generate problematic content (especially when prompted to do so).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** A 16B parameter GPT-like model fine-tuned on a blend of the [`oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1) and [`databricks-dolly-15k`](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets.
- **Language(s) (NLP):** English
- **License:** BigCode Open RAIL-M v1
- **Finetuned from model:** [bigcode/starcoderbase](https://huggingface.co/bigcode/starcoderbase)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bigcode-project/starcoder
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat-playground
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
StarChat Alpha is intended for educational and/or research purposes and in that respect can be used to probe the programming capabilities of open-source language models.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
StarChat Alpha has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) which is derived from The Stack.
Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.
For example, it may produce code that does not compile or that produces incorrect results.
It may also produce code that is vulnerable to security exploits.
We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.
StarChat Alpha was fine-tuned from the base model [StarCoder Base](https://huggingface.co/bigcode/starcoderbase); please refer to its model card's [Limitations Section](https://huggingface.co/bigcode/starcoderbase#limitations) for relevant information.
In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view).
## How to Get Started with the Model
Use the code below to get started with the model.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-alpha", torch_dtype=torch.bfloat16, device_map="auto")
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")
# We use a special <|end|> token with ID 49155 to denote ends of a turn
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)
# You can sort a list in Python by using the sort() method. Here's an example:\n\n```\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\nnumbers.sort()\nprint(numbers)\n```\n\nThis will sort the list in place and print the sorted list.
```
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{Tunstall2023starchat-alpha,
author = {Tunstall, Lewis and Lambert, Nathan and Rajani, Nazneen and Beeching, Edward and Le Scao, Teven and von Werra, Leandro and Han, Sheon and Schmid, Philipp and Rush, Alexander},
title = {Creating a Coding Assistant with StarCoder},
journal = {Hugging Face Blog},
year = {2023},
note = {https://huggingface.co/blog/starchat},
}
```
|
{}
|
RichardErkhov/HuggingFaceH4_-_starchat-alpha-4bits
| null |
[
"transformers",
"safetensors",
"gpt_bigcode",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-15T13:22:31+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #gpt_bigcode #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
Quantization made by Richard Erkhov.
Github
Discord
Request more models
starchat-alpha - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: bigcode-openrail-m
datasets:
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
language:
- en
library_name: transformers
tags:
- code
---
# Model Card for StarChat Alpha
_Note, you may be interested in the Beta version of StarChat here._
StarChat is a series of language models that are fine-tuned from StarCoder to act as helpful coding assistants. StarChat Alpha is the first of these models, and as an alpha release it is only intended for educational or research purposes. In particular, the model has not been aligned to human preferences with techniques like RLHF, so it may generate problematic content (especially when prompted to do so).
## Model Details
### Model Description
- Model type: A 16B parameter GPT-like model fine-tuned on a blend of the 'oasst1' and 'databricks-dolly-15k' datasets.
- Language(s) (NLP): English
- License: BigCode Open RAIL-M v1
- Finetuned from model: bigcode/starcoderbase
### Model Sources [optional]
- Repository: URL
- Demo: URL
## Uses
StarChat Alpha is intended for educational and/or research purposes and in that respect can be used to probe the programming capabilities of open-source language models.
## Bias, Risks, and Limitations
StarChat Alpha has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder dataset which is derived from The Stack.
Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.
For example, it may produce code that does not compile or that produces incorrect results.
It may also produce code that is vulnerable to security exploits.
We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.
StarChat Alpha was fine-tuned from the base model StarCoder Base; please refer to its model card's Limitations Section for relevant information.
In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.
## How to Get Started with the Model
Use the code below to get started with the model.
Here's how you can run the model using the 'pipeline()' function from Transformers:
(Python example omitted here; the original card's snippet showed sorting a list in place with `numbers.sort()` and printing the result.)
BibTeX:
|
[
"# Model Card for StarChat Alpha\n\n\n_Note, you may be interested in the Beta version of StarChat here._\n\nStarChat is a series of language models that are fine-tuned from StarCoder to act as helpful coding assistants. StarChat Alpha is the first of these models, and as an alpha release is only intended for educational or research purpopses. In particular, the model has not been aligned to human preferences with techniques like RLHF, so may generate problematic content (especially when prompted to do so).",
"## Model Details",
"### Model Description\n\n\n\n- Model type: A 16B parameter GPT-like model fine-tuned on a blend of the 'oasst1' and 'databricks-dolly-15k' datasets.\n- Language(s) (NLP): English\n- License: BigCode Open RAIL-M v1\n- Finetuned from model: bigcode/starcoderbase",
"### Model Sources [optional]\n\n\n\n- Repository: URL\n- Demo: URL",
"## Uses\n\n\n\nStarChat Alpha is intended for educational and/or research purposes and in that respect can be used to probe the programming capabilities of open-source language models.",
"## Bias, Risks, and Limitations\n\n\n\nStarChat Alpha has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). \nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder dataset which is derived from The Stack.\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. \nFor example, it may produce code that does not compile or that produces incorrect results. \nIt may also produce code that is vulnerable to security exploits. \nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\nStarChat Alpha was fine-tuned from the base model StarCoder Base, please refer to its model card's Limitations Section for relevant information. \nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\\URL()\\nprint(numbers)\\n\n\n\n\nBibTeX:"
] |
[
"TAGS\n#transformers #safetensors #gpt_bigcode #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for StarChat Alpha\n\n\n_Note, you may be interested in the Beta version of StarChat here._\n\nStarChat is a series of language models that are fine-tuned from StarCoder to act as helpful coding assistants. StarChat Alpha is the first of these models, and as an alpha release is only intended for educational or research purpopses. In particular, the model has not been aligned to human preferences with techniques like RLHF, so may generate problematic content (especially when prompted to do so).",
"## Model Details",
"### Model Description\n\n\n\n- Model type: A 16B parameter GPT-like model fine-tuned on a blend of the 'oasst1' and 'databricks-dolly-15k' datasets.\n- Language(s) (NLP): English\n- License: BigCode Open RAIL-M v1\n- Finetuned from model: bigcode/starcoderbase",
"### Model Sources [optional]\n\n\n\n- Repository: URL\n- Demo: URL",
"## Uses\n\n\n\nStarChat Alpha is intended for educational and/or research purposes and in that respect can be used to probe the programming capabilities of open-source language models.",
"## Bias, Risks, and Limitations\n\n\n\nStarChat Alpha has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). \nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder dataset which is derived from The Stack.\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. \nFor example, it may produce code that does not compile or that produces incorrect results. \nIt may also produce code that is vulnerable to security exploits. \nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\nStarChat Alpha was fine-tuned from the base model StarCoder Base, please refer to its model card's Limitations Section for relevant information. \nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\\URL()\\nprint(numbers)\\n\n\n\n\nBibTeX:"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-35-layer](https://huggingface.co/Citaman/command-r-35-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-35-layer
layer_range: [0, 34]
- model: Citaman/command-r-35-layer
layer_range: [1, 35]
merge_method: slerp
base_model: Citaman/command-r-35-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
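For intuition only, the sketch below shows what spherical linear interpolation (SLERP) between two weight tensors looks like; it is a toy illustration of the idea behind the configuration above, not mergekit's actual implementation, and the tensors and `t` value are made up.

```python
# Toy illustration of SLERP between two weight tensors (NOT mergekit's implementation).
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between tensors a and b with interpolation factor t."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))  # angle between the two
    if omega.abs() < eps:  # nearly parallel: fall back to plain linear interpolation
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape)
    so = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape)

# Stand-ins for one layer taken from layer_range [0, 34] and its shifted counterpart [1, 35].
layer_a = torch.randn(4, 4)
layer_b = torch.randn(4, 4)
merged = slerp(0.5, layer_a, layer_b)  # 0.5 is the default t value in the config above
print(merged.shape)
```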
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-35-layer"]}
|
Citaman/command-r-34-layer
| null |
[
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-35-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:24:07+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-35-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-35-layer
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-35-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-35-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-35-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
dotvignesh/TAVGen-Explain-7b
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:24:12+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# NeverSleep's [Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss) 2 bit HQQ quant. Now with METAOFFLOAD!
This is another [HQQ Quantization](https://github.com/mobiusml/hqq) of Noromaid Mixtral, the same as [ProphetOfBostrom's earlier work](https://huggingface.co/ProphetOfBostrom/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss_attn-4bit-moe-2bit-HQQ) but with Metadata Offloading enabled, which wasn't available when they made their quant.
The result is a version of Noromaid Mixtral that runs in **13.68 Gigabytes!** *Well within reach of us GPU-Poor folk*
The quant is largely untested and I haven't gotten around to benchmarking against the base model yet, but give it a go and let me know how you get on!

---
# Disclaimer:
## This model is experimental, do not expect everything to work.
This model uses the Chatml **prompting format**
---
Beeg noromaid on ***steroids***. Suitable for RP, ERP.
This model was trained on the Zloss fork of Charles, and should fix the issues the model had.

Use the ChatML prompt format, but not the special tokens.

The reason is that Axolotl essentially merges the finetune with the base model at a weight of 1.0, which is too much, so I use another script, available [HERE](https://github.com/DocShotgun/LLM-notebooks/blob/main/weighted-lora-merge.ipynb), to merge with a lower weight; sadly, it doesn't handle the special ChatML token. It's like Orca2 in that respect.
## Credits:
- Undi
- IkariDev
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains FP16 files of Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
### Prompt format: Chatml
```
<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{input}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
```
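As a small illustration (not taken from the original card), the template above can be filled in with plain Python string formatting; the system prompt and user message below are placeholders.

```python
# Hedged sketch: building a ChatML-style prompt for this model.
# The system prompt and user message are placeholders, not part of the original card.
CHATML_TEMPLATE = (
    "<|im_start|>system\n{sysprompt}<|im_end|>\n"
    "<|im_start|>user\n{input}<|im_end|>\n"
    "<|im_start|>assistant\n"  # generation continues here; {output}<|im_end|> is the training target
)

prompt = CHATML_TEMPLATE.format(
    sysprompt="You are a creative roleplay assistant.",
    input="Describe the tavern the party just walked into.",
)
print(prompt)
```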
## Datasets used:
- Aesir 1, 2 & 3 modified by us, credit to ([MinervaAI](https://huggingface.co/MinervaAI) / [Gryphe](https://huggingface.co/Gryphe))
- [LimaRP-20231109](https://huggingface.co/datasets/lemonilia/LimaRP) ([Lemonilia](https://huggingface.co/lemonilia))
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal) ([NobodyExistsOnTheInternet](https://huggingface.co/NobodyExistsOnTheInternet))
- [No-robots-ShareGPT](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt) ([Doctor-Shotgun](https://huggingface.co/Doctor-Shotgun))
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
|
{"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "moe", "mixtral", "hqq", "text-generation-inference", "conversational"], "inference": false}
|
Chronal/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss_attn-4bit-moe-2bit-metaoffload-HQQ
| null |
[
"transformers",
"mixtral",
"text-generation",
"not-for-all-audiences",
"moe",
"hqq",
"text-generation-inference",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"region:us"
] | null |
2024-04-15T13:24:28+00:00
|
[] |
[] |
TAGS
#transformers #mixtral #text-generation #not-for-all-audiences #moe #hqq #text-generation-inference #conversational #license-cc-by-nc-4.0 #autotrain_compatible #region-us
|
# NeverSleep's Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss 2 bit HQQ quant. Now with METAOFFLOAD!
This is another HQQ Quantization of Noromaid Mixtral, the same as ProphetOfBostrom's earlier work but with Metadata Offloading enabled, which wasn't available when they made their quant.
The result is a version of Noromaid Mixtral that runs in 13.68 Gigabytes! *Well within reach of us GPU-Poor folk*
The quant is largely untested and I haven't gotten around to benchmarking against the base model yet, but give it a go and let me know how you get on!
!image/png
---
# Disclaimer:
## This model is experimental, do not expect everything to work.
This model uses the Chatml prompting format
---
Beeg noromaid on *steroids*. Suitable for RP, ERP.
This model was trained on the Zloss fork of Charles, and should fix the issues the model had.

Use the ChatML prompt format, but not the special tokens.

The reason is that Axolotl essentially merges the finetune with the base model at a weight of 1.0, which is too much, so I use another script, available HERE, to merge with a lower weight; sadly, it doesn't handle the special ChatML token. It's like Orca2 in that respect.
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.
FP16 - by IkariDev and Undi
GGUF - by IkariDev and Undi
## Ratings:
Note: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi".
### Prompt format: Chatml
## Datasets used:
- Aesir 1, 2 & 3 modified by us, credit to (MinervaAI / Gryphe)
- LimaRP-20231109 (Lemonilia)
- ToxicQAFinal (NobodyExistsOnTheInternet)
- No-robots-ShareGPT (Doctor-Shotgun)
## Others
Undi: If you want to support me, you can here.
IkariDev: Visit my retro/neocities style website please kek
|
[
"# NeverSleep's Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss 2 bit HQQ quant. Now with METAOFFLOAD!\n\nThis is another HQQ Quantization of Noromaid Mixtral, the same as ProphetOfBostrom's earlier work but with Metadata Offloading enabled, which wasn't available when they made their quant.\nThe result is a version of Noromaid Mixtral that runs in 13.68 Gigabytes! *Well within reach of us GPU-Poor folk*\n\nThe quant is largely untested and I haven't gotten around to benchmarking against the base model yet, but give it a go and let me know how you get on!\n\n!image/png\n\n\n\n---",
"# Disclaimer:",
"## This model is experimental, do not expect everything to work.\n\nThis model uses the Chatml prompting format\n\n---\n\n\nBeeg noromaid on *steroids*. Suitable for RP, ERP.\n\nThis model was trained on the Zloss fork of Charles, and should fix issue the model had.\n\nUse Chatml prompt format, but not the special token.\n\nThe reason is that Axolotl merge the finetune with the base model at 1.0 weight basically, but this is too much, so I use another script available HERE to merge with less weight, sadly, it don't take the special Chatml token. It's like Orca2 for the matter.",
"## Credits:\n- Undi\n- IkariDev",
"## Description\n\n\n\nThis repo contains FP16 files of Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.\n\nFP16 - by IkariDev and Undi\n\n\n\n\n\n\n\n\n\n\n\nGGUF - by IkariDev and Undi",
"## Ratings:\n\nNote: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here!\n\nNo ratings yet!\n\nIf you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is \"ikaridev\" and \"undi\".",
"### Prompt format: Chatml",
"## Datasets used:\n\n- Aesir 1, 2 & 3 modified by us, credit to (MinervaAI / Gryphe)\n- LimaRP-20231109 (Lemonilia)\n- ToxicQAFinal (NobodyExistsOnTheInternet\n- No-robots-ShareGPT (Doctor-Shotgun)",
"## Others\n\nUndi: If you want to support me, you can here.\n\nIkariDev: Visit my retro/neocities style website please kek"
] |
[
"TAGS\n#transformers #mixtral #text-generation #not-for-all-audiences #moe #hqq #text-generation-inference #conversational #license-cc-by-nc-4.0 #autotrain_compatible #region-us \n",
"# NeverSleep's Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss 2 bit HQQ quant. Now with METAOFFLOAD!\n\nThis is another HQQ Quantization of Noromaid Mixtral, the same as ProphetOfBostrom's earlier work but with Metadata Offloading enabled, which wasn't available when they made their quant.\nThe result is a version of Noromaid Mixtral that runs in 13.68 Gigabytes! *Well within reach of us GPU-Poor folk*\n\nThe quant is largely untested and I haven't gotten around to benchmarking against the base model yet, but give it a go and let me know how you get on!\n\n!image/png\n\n\n\n---",
"# Disclaimer:",
"## This model is experimental, do not expect everything to work.\n\nThis model uses the Chatml prompting format\n\n---\n\n\nBeeg noromaid on *steroids*. Suitable for RP, ERP.\n\nThis model was trained on the Zloss fork of Charles, and should fix issue the model had.\n\nUse Chatml prompt format, but not the special token.\n\nThe reason is that Axolotl merge the finetune with the base model at 1.0 weight basically, but this is too much, so I use another script available HERE to merge with less weight, sadly, it don't take the special Chatml token. It's like Orca2 for the matter.",
"## Credits:\n- Undi\n- IkariDev",
"## Description\n\n\n\nThis repo contains FP16 files of Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.\n\nFP16 - by IkariDev and Undi\n\n\n\n\n\n\n\n\n\n\n\nGGUF - by IkariDev and Undi",
"## Ratings:\n\nNote: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here!\n\nNo ratings yet!\n\nIf you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is \"ikaridev\" and \"undi\".",
"### Prompt format: Chatml",
"## Datasets used:\n\n- Aesir 1, 2 & 3 modified by us, credit to (MinervaAI / Gryphe)\n- LimaRP-20231109 (Lemonilia)\n- ToxicQAFinal (NobodyExistsOnTheInternet\n- No-robots-ShareGPT (Doctor-Shotgun)",
"## Others\n\nUndi: If you want to support me, you can here.\n\nIkariDev: Visit my retro/neocities style website please kek"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-sharded-bf16-finetuned-mental-health-conversational
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
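For orientation, the hyperparameters above map roughly onto the following 🤗 `TrainingArguments`; this is a hedged reconstruction for illustration, not the actual training script (the output directory name is an assumption).

```python
# Hedged reconstruction of the training setup from the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="falcon-7b-sharded-bf16-finetuned-mental-health-conversational",  # assumption
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # total train batch size of 8
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    max_steps=320,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer configuration.
)
```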
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "ybelkada/falcon-7b-sharded-bf16", "model-index": [{"name": "falcon-7b-sharded-bf16-finetuned-mental-health-conversational", "results": []}]}
|
ssalogin/sdflkjwljfhwkhuhwueidw
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null |
2024-04-15T13:25:21+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-ybelkada/falcon-7b-sharded-bf16 #region-us
|
# falcon-7b-sharded-bf16-finetuned-mental-health-conversational
This model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# falcon-7b-sharded-bf16-finetuned-mental-health-conversational\n\nThis model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 320",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-ybelkada/falcon-7b-sharded-bf16 #region-us \n",
"# falcon-7b-sharded-bf16-finetuned-mental-health-conversational\n\nThis model is a fine-tuned version of ybelkada/falcon-7b-sharded-bf16 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 320",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
These little ones are easy to train for a task!

They already have some training (not great), but they can take more and more (and, being Mistral-based, they can take LoRA modules!).

Remember to add training on top of the LoRA you merge with it: i.e. load the LoRA and train a few cycles (e.g. 20 steps) on the same data that was used for the LoRA, see if it took hold, then merge it!
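A hedged sketch of that workflow with PEFT is shown below; the adapter id, output path, and training loop are placeholders, not part of this card.

```python
# Hedged sketch of the workflow described above: load an existing LoRA on top of the base
# model, briefly continue training it on the same data, then merge it into the weights.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "LeroyDyer/Mixtral_AI_MiniTron"        # base model named in this card
lora_id = "your-username/your-lora-adapter"      # placeholder: the LoRA you want to merge

base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, lora_id, is_trainable=True)

# ... continue training `model` for a handful of steps (e.g. ~20) on the same data the
# LoRA was originally trained on, and check that it "took hold" ...

merged = model.merge_and_unload()                # fold the adapter into the base weights
merged.save_pretrained("minitron-with-merged-lora")
tokenizer.save_pretrained("minitron-with-merged-lora")
```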
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model :** LeroyDyer/Mixtral_AI_MiniTron
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "LeroyDyer/Mixtral_AI_MiniTron"}
|
LeroyDyer/Mixtral_AI_MiniTron_Chat
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:LeroyDyer/Mixtral_AI_MiniTron",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T13:26:36+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-LeroyDyer/Mixtral_AI_MiniTron #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
These little ones are easy to train for a task!

They already have some training (not great), but they can take more and more (and, being Mistral-based, they can take LoRA modules!).

Remember to add training on top of the LoRA you merge with it: i.e. load the LoRA and train a few cycles (e.g. 20 steps) on the same data that was used for the LoRA, see if it took hold, then merge it!
# Uploaded model
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model : LeroyDyer/Mixtral_AI_MiniTron
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_MiniTron\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-LeroyDyer/Mixtral_AI_MiniTron #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_MiniTron\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
automatic-speech-recognition
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Abhinay123/wav2vec2_vedas2_epoch_5_step_1399
| null |
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T13:29:19+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
feature-extraction
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlong-t5-tglobal-large
This model is a fine-tuned version of [agemagician/mlong-t5-tglobal-large](https://huggingface.co/agemagician/mlong-t5-tglobal-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9297
- Rouge1: 33.2751
- Rouge2: 14.7214
- Rougel: 25.0329
- Rougelsum: 26.9804
- Gen Len: 63.937
## Model description
More information needed
## Intended uses & limitations
More information needed
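The ROUGE and generation-length metrics above suggest a summarization-style sequence-to-sequence objective. Under that assumption only, a minimal usage sketch with the generic 🤗 Transformers seq2seq classes might look like the following; the repository id is the one listed for this card and the input document is a placeholder.

```python
# Hedged sketch: assumes a summarization-style seq2seq use; not an official example.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "hesum-anonymous/mT5LongHeSum-large"  # repository associated with this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

document = "..."  # placeholder: a long input document
inputs = tokenizer(document, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```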
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Gen Len | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLSum |
|:-------------:|:-----:|:-----:|:-------:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.5919 | 1.0 | 1050 | 61.5895 | 1.9940 | 30.603 | 12.7279 | 22.8958 | 24.5756 |
| 2.3025 | 2.0 | 2100 | 96.4781 | 1.9429 | 30.2088 | 12.8612 | 22.4477 | 24.6023 |
| 2.1456 | 3.0 | 3150 | 80.6381 | 1.8979 | 31.4743 | 13.8002 | 23.6389 | 25.7835 |
| 1.9977 | 4.0 | 4200 | 72.9752 | 1.8858 | 32.3099 | 14.3439 | 24.3416 | 26.2897 |
| 1.9059 | 5.0 | 5250 | 68.4971 | 1.8878 | 32.2531 | 14.0683 | 24.3766 | 26.1912 |
| 1.8521 | 6.0 | 6300 | 68.9524 | 1.8892 | 32.3429 | 14.0016 | 24.2874 | 26.3216 |
| 1.7472 | 7.0 | 7000 | 60.46 | 1.8865 | 32.8966 | 14.8847 | 25.1771 | 26.9613 |
| 1.7018 | 8.0 | 8000 | 65.807 | 1.8858 | 32.6402 | 14.4404 | 24.6794 | 26.5654 |
| 1.6337 | 9.0 | 9000 | 79.875 | 1.9019 | 32.2069 | 13.8683 | 24.0734 | 26.353 |
| 1.5773 | 10.0 | 10000 | 65.88 | 1.9043 | 32.8499 | 14.5395 | 24.8736 | 26.9515 |
| 1.5238 | 11.0 | 11000 | 63.208 | 1.9148 | 32.8182 | 14.322 | 24.7011 | 26.5718 |
| 1.4779 | 12.0 | 12000 | 63.937 | 1.9297 | 33.2751 | 14.7214 | 25.0329 | 26.9804 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "agemagician/mlong-t5-tglobal-large", "model-index": [{"name": "mlong-t5-tglobal-large", "results": []}]}
|
hesum-anonymous/mT5LongHeSum-large
| null |
[
"transformers",
"safetensors",
"longt5",
"feature-extraction",
"generated_from_trainer",
"base_model:agemagician/mlong-t5-tglobal-large",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T13:29:22+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #longt5 #feature-extraction #generated_from_trainer #base_model-agemagician/mlong-t5-tglobal-large #license-apache-2.0 #endpoints_compatible #region-us
|
mlong-t5-tglobal-large
======================
This model is a fine-tuned version of agemagician/mlong-t5-tglobal-large on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9297
* Rouge1: 33.2751
* Rouge2: 14.7214
* Rougel: 25.0329
* Rougelsum: 26.9804
* Gen Len: 63.937
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.37.2
* Pytorch 2.2.0+cu121
* Datasets 2.16.1
* Tokenizers 0.15.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1"
] |
[
"TAGS\n#transformers #safetensors #longt5 #feature-extraction #generated_from_trainer #base_model-agemagician/mlong-t5-tglobal-large #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.1"
] |
text-generation
|
transformers
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
starchat-alpha - bnb 8bits
- Model creator: https://huggingface.co/HuggingFaceH4/
- Original model: https://huggingface.co/HuggingFaceH4/starchat-alpha/
Original model description:
---
license: bigcode-openrail-m
datasets:
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
language:
- en
library_name: transformers
tags:
- code
---
# Model Card for StarChat Alpha
<!-- Provide a quick summary of what the model is/does. -->
_Note, you may be interested in the Beta version of StarChat [here](https://huggingface.co/HuggingFaceH4/starchat-beta)._
StarChat is a series of language models that are fine-tuned from StarCoder to act as helpful coding assistants. StarChat Alpha is the first of these models, and as an alpha release is only intended for educational or research purposes. In particular, the model has not been aligned to human preferences with techniques like RLHF, so may generate problematic content (especially when prompted to do so).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** A 16B parameter GPT-like model fine-tuned on a blend of the [`oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1) and [`databricks-dolly-15k`](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets.
- **Language(s) (NLP):** English
- **License:** BigCode Open RAIL-M v1
- **Finetuned from model:** [bigcode/starcoderbase](https://huggingface.co/bigcode/starcoderbase)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bigcode-project/starcoder
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat-playground
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
StarChat Alpha is intended for educational and/or research purposes and in that respect can be used to probe the programming capabilities of open-source language models.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
StarChat Alpha has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) which is derived from The Stack.
Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.
For example, it may produce code that does not compile or that produces incorrect results.
It may also produce code that is vulnerable to security exploits.
We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.
StarChat Alpha was fine-tuned from the base model [StarCoder Base](https://huggingface.co/bigcode/starcoderbase), please refer to its model card's [Limitations Section](https://huggingface.co/bigcode/starcoderbase#limitations) for relevant information.
In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view).
## How to Get Started with the Model
Use the code below to get started with the model.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-alpha", torch_dtype=torch.bfloat16, device_map="auto")
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")
# We use a special <|end|> token with ID 49155 to denote ends of a turn
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)
# You can sort a list in Python by using the sort() method. Here's an example:\n\n```\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\nnumbers.sort()\nprint(numbers)\n```\n\nThis will sort the list in place and print the sorted list.
```
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{Tunstall2023starchat-alpha,
author = {Tunstall, Lewis and Lambert, Nathan and Rajani, Nazneen and Beeching, Edward and Le Scao, Teven and von Werra, Leandro and Han, Sheon and Schmid, Philipp and Rush, Alexander},
title = {Creating a Coding Assistant with StarCoder},
journal = {Hugging Face Blog},
year = {2023},
note = {https://huggingface.co/blog/starchat},
}
```
|
{}
|
RichardErkhov/HuggingFaceH4_-_starchat-alpha-8bits
| null |
[
"transformers",
"safetensors",
"gpt_bigcode",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null |
2024-04-15T13:30:27+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #gpt_bigcode #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
|
Quantization made by Richard Erkhov.
Github
Discord
Request more models
starchat-alpha - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: bigcode-openrail-m
datasets:
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
language:
- en
library_name: transformers
tags:
- code
---
# Model Card for StarChat Alpha
_Note, you may be interested in the Beta version of StarChat here._
StarChat is a series of language models that are fine-tuned from StarCoder to act as helpful coding assistants. StarChat Alpha is the first of these models, and as an alpha release is only intended for educational or research purposes. In particular, the model has not been aligned to human preferences with techniques like RLHF, so may generate problematic content (especially when prompted to do so).
## Model Details
### Model Description
- Model type: A 16B parameter GPT-like model fine-tuned on a blend of the 'oasst1' and 'databricks-dolly-15k' datasets.
- Language(s) (NLP): English
- License: BigCode Open RAIL-M v1
- Finetuned from model: bigcode/starcoderbase
### Model Sources [optional]
- Repository: URL
- Demo: URL
## Uses
StarChat Alpha is intended for educational and/or research purposes and in that respect can be used to probe the programming capabilities of open-source language models.
## Bias, Risks, and Limitations
StarChat Alpha has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder dataset which is derived from The Stack.
Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.
For example, it may produce code that does not compile or that produces incorrect results.
It may also produce code that is vulnerable to security exploits.
We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.
StarChat Alpha was fine-tuned from the base model StarCoder Base, please refer to its model card's Limitations Section for relevant information.
In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.
## How to Get Started with the Model
Use the code below to get started with the model.
Here's how you can run the model using the 'pipeline()' function from Transformers:
\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\URL()\nprint(numbers)\n
BibTeX:
|
[
"# Model Card for StarChat Alpha\n\n\n_Note, you may be interested in the Beta version of StarChat here._\n\nStarChat is a series of language models that are fine-tuned from StarCoder to act as helpful coding assistants. StarChat Alpha is the first of these models, and as an alpha release is only intended for educational or research purpopses. In particular, the model has not been aligned to human preferences with techniques like RLHF, so may generate problematic content (especially when prompted to do so).",
"## Model Details",
"### Model Description\n\n\n\n- Model type: A 16B parameter GPT-like model fine-tuned on a blend of the 'oasst1' and 'databricks-dolly-15k' datasets.\n- Language(s) (NLP): English\n- License: BigCode Open RAIL-M v1\n- Finetuned from model: bigcode/starcoderbase",
"### Model Sources [optional]\n\n\n\n- Repository: URL\n- Demo: URL",
"## Uses\n\n\n\nStarChat Alpha is intended for educational and/or research purposes and in that respect can be used to probe the programming capabilities of open-source language models.",
"## Bias, Risks, and Limitations\n\n\n\nStarChat Alpha has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). \nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder dataset which is derived from The Stack.\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. \nFor example, it may produce code that does not compile or that produces incorrect results. \nIt may also produce code that is vulnerable to security exploits. \nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\nStarChat Alpha was fine-tuned from the base model StarCoder Base, please refer to its model card's Limitations Section for relevant information. \nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\\URL()\\nprint(numbers)\\n\n\n\n\nBibTeX:"
] |
[
"TAGS\n#transformers #safetensors #gpt_bigcode #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"# Model Card for StarChat Alpha\n\n\n_Note, you may be interested in the Beta version of StarChat here._\n\nStarChat is a series of language models that are fine-tuned from StarCoder to act as helpful coding assistants. StarChat Alpha is the first of these models, and as an alpha release is only intended for educational or research purpopses. In particular, the model has not been aligned to human preferences with techniques like RLHF, so may generate problematic content (especially when prompted to do so).",
"## Model Details",
"### Model Description\n\n\n\n- Model type: A 16B parameter GPT-like model fine-tuned on a blend of the 'oasst1' and 'databricks-dolly-15k' datasets.\n- Language(s) (NLP): English\n- License: BigCode Open RAIL-M v1\n- Finetuned from model: bigcode/starcoderbase",
"### Model Sources [optional]\n\n\n\n- Repository: URL\n- Demo: URL",
"## Uses\n\n\n\nStarChat Alpha is intended for educational and/or research purposes and in that respect can be used to probe the programming capabilities of open-source language models.",
"## Bias, Risks, and Limitations\n\n\n\nStarChat Alpha has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). \nModels trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the StarCoder dataset which is derived from The Stack.\n\n\nSince the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. \nFor example, it may produce code that does not compile or that produces incorrect results. \nIt may also produce code that is vulnerable to security exploits. \nWe have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.\n\nStarChat Alpha was fine-tuned from the base model StarCoder Base, please refer to its model card's Limitations Section for relevant information. \nIn particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:\n\n\\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\\URL()\\nprint(numbers)\\n\n\n\n\nBibTeX:"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
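No official snippet is provided. The sketch below is an assumption based on this repository's `llama` / text-generation / conversational tags and uses the standard 🤗 Transformers pipeline; the prompt, dtype, and device settings are placeholders that may need adjusting.

```python
# Hedged sketch: generic causal-LM generation; not an official example for this model.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="OwOOwO/dumbo-krillin5",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(pipe("Write a short haiku about the sea.", max_new_tokens=64)[0]["generated_text"])
```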
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
OwOOwO/dumbo-krillin5
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:32:01+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotions-shixm
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3951
- Accuracy: 0.844
- F1: 0.8429
## Model description
More information needed
## Intended uses & limitations
More information needed
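As a hedged illustration only (no official example is given), inference with the standard 🤗 Transformers text-classification pipeline would typically look like this; the input sentence is a placeholder.

```python
# Hedged sketch: standard text-classification inference over the emotion labels.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sam102400/distilbert-base-uncased-finetuned-emotions-shixm",
)
print(classifier("I can't wait to see you this weekend!"))
```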
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9695 | 1.0 | 250 | 0.4927 | 0.8245 | 0.8222 |
| 0.4398 | 2.0 | 500 | 0.3951 | 0.8375 | 0.8359 |
| 0.3643 | 3.0 | 750 | 0.3792 | 0.845 | 0.8437 |
| 0.3211 | 4.0 | 1000 | 0.3872 | 0.8445 | 0.8427 |
| 0.2938 | 5.0 | 1250 | 0.3951 | 0.844 | 0.8429 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.18.0
- Tokenizers 0.15.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotions-shixm", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.844, "name": "Accuracy"}, {"type": "f1", "value": 0.8428933768777381, "name": "F1"}]}]}]}
|
sam102400/distilbert-base-uncased-finetuned-emotions-shixm
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T13:32:11+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-emotions-shixm
================================================
This model is a fine-tuned version of distilbert/distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3951
* Accuracy: 0.844
* F1: 0.8429
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.37.2
* Pytorch 2.2.0
* Datasets 2.18.0
* Tokenizers 0.15.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.0\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.0\n* Datasets 2.18.0\n* Tokenizers 0.15.1"
] |
sentence-similarity
|
sentence-transformers
|
# Yunika/muril-base-cased-sentence-transformer
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Yunika/muril-base-cased-sentence-transformer')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Yunika/muril-base-cased-sentence-transformer')
model = AutoModel.from_pretrained('Yunika/muril-base-cased-sentence-transformer')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Yunika/muril-base-cased-sentence-transformer)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3181 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 31,
"weight_decay": 0.01
}
```
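For readers who want to reproduce a comparable run, the sketch below shows how these pieces (batch size 16, `TripletLoss` with margin 5, one epoch, 31 warmup steps, weight decay 0.01, lr 2e-05) are typically wired together with sentence-transformers; the base checkpoint and the triplet examples are assumptions, not the exact training setup.

```python
# Hedged sketch of a comparable TripletLoss fine-tuning run; the base checkpoint
# and the triplet examples below are placeholders.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("google/muril-base-cased")  # assumed base model

train_examples = [
    InputExample(texts=["anchor sentence", "positive sentence", "negative sentence"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.TripletLoss(model=model, triplet_margin=5)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=31,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-05},
)
```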
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["embedding-data/QQP_triplets"], "pipeline_tag": "sentence-similarity"}
|
Yunika/muril-base-cased-sentence-transformer
| null |
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:embedding-data/QQP_triplets",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T13:33:59+00:00
|
[] |
[] |
TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #dataset-embedding-data/QQP_triplets #endpoints_compatible #region-us
|
# Yunika/muril-base-cased-sentence-transformer
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 3181 with parameters:
Loss:
'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# Yunika/muril-base-cased-sentence-transformer\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 3181 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #dataset-embedding-data/QQP_triplets #endpoints_compatible #region-us \n",
"# Yunika/muril-base-cased-sentence-transformer\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 3181 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/upstage/llama-65b-instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/llama-65b-instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
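For the split Q6_K files in the table below, the parts simply need to be joined byte-for-byte in order. A hedged Python sketch (filenames taken from the table, output path assumed) is:

```python
# Hedged sketch: concatenate the two Q6_K parts into a single GGUF file.
parts = [
    "llama-65b-instruct.i1-Q6_K.gguf.part1of2",
    "llama-65b-instruct.i1-Q6_K.gguf.part2of2",
]
with open("llama-65b-instruct.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(1 << 20):  # copy in 1 MiB chunks
                out.write(chunk)
```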
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 14.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 15.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 20.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-Q2_K.gguf) | i1-Q2_K | 24.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 24.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 26.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 28.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 28.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 31.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 34.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 34.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-Q4_0.gguf) | i1-Q4_0 | 37.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 37.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 39.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 45.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 46.3 | |
| [PART 1](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-65b-instruct-i1-GGUF/resolve/main/llama-65b-instruct.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 53.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "tags": ["upstage", "llama", "instruct", "instruction"], "base_model": "upstage/llama-65b-instruct", "quantized_by": "mradermacher"}
|
mradermacher/llama-65b-instruct-i1-GGUF
| null |
[
"transformers",
"gguf",
"upstage",
"llama",
"instruct",
"instruction",
"en",
"base_model:upstage/llama-65b-instruct",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T13:34:10+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #upstage #llama #instruct #instruction #en #base_model-upstage/llama-65b-instruct #endpoints_compatible #region-us
|
About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #upstage #llama #instruct #instruction #en #base_model-upstage/llama-65b-instruct #endpoints_compatible #region-us \n"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Scrunch7596/invoices-donut-model-v1
| null |
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T13:37:53+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|

## VAGO solutions SauerkrautLM-Qwen-32b
Introducing **SauerkrautLM-Qwen-32b** – our Sauerkraut version of the powerful [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)!
The model **SauerkrautLM-Qwen-32b** is a **joint effort** between **VAGO solutions** and **Hyperspace.ai.**
- Finetuned with **SFT**
- Aligned with **DPO**
# Table of Contents
1. [Overview of all SauerkrautLM-Qwen-32b](#all-SauerkrautLM-Qwen-32b)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training procedure](#proceed-of-the-training)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-Qwen-32b
| Model | HF | EXL2 | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Qwen-32b | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Qwen-32b) | coming soon | coming soon | coming soon |
## Model Details
**SauerkrautLM-Qwen-32b**
- **Model Type:** SauerkrautLM-Qwen-32b is a finetuned Model based on [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)
- **Language(s):** German, English
- **License:** [tongyi-qianwen-research](https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE)
- **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/)
### Training procedure:
- We trained this model for 2 epochs on 160k data samples with SFT.
- Afterwards we applied DPO for 1 epoch with 110k data.
- LaserRMT version coming soon
**We taught this model German language skills.** As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).
### Prompt Template:
**English:**
```
<|im_start|>system
You are SauerkrautLM, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
**German:**
```
<|im_start|>system
Du bist SauerkrautLM, ein hilfreicher und freundlicher KI-Assistent.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
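Since this is standard ChatML, the same prompt can also be built programmatically — a minimal sketch using transformers' chat-template API, assuming the tokenizer ships with a ChatML chat template (as the Qwen1.5 base does):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("VAGOsolutions/SauerkrautLM-Qwen-32b")

messages = [
    {"role": "system", "content": "You are SauerkrautLM, a helpful AI assistant."},
    {"role": "user", "content": "Wie hoch ist die Zugspitze?"},
]
# Renders the <|im_start|> / <|im_end|> prompt shown above, ready for generation.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```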
### Example output in German:
```
Die Karte zeigte eine verborgene Höhle, die in den umliegenden Bergen zu finden war. Mia war von der Idee fasziniert, diese Höhle zu entdecken und ihre Geheimnisse zu lüften. Sie nahm die Karte mit sich und machte sich auf den Weg, um die Höhle zu finden.
Die Wanderung zu den Bergen war eine Herausforderung, aber Mia war fest entschlossen, ihr Abenteuer zu vollenden. Sie überwand steinige Wege und überquerte klirrende Bäche, die ihre Füße kühlten und ihr die Energie für den Rest des Weges gab.
Endlich erreichte Mia die Höhle, die von einem dichten Wald umgeben war. Die Höhle war ein Ort der Geheimnisse und des Staunens, der ihr Herz höher schlagen ließ. Sie betrat die Höhle, und die Dunkelheit umhüllte sie wie ein Schleier aus Stille.
In der Höhle fand Mia eine alte Schatzkiste, die mit einem alten, verwitterten Holz verziert war. Mit zitternden Händen öffnete sie die Schatzkiste und fand darin eine alte, zerfledderte Schriftrolle. Die Schriftrolle war ein geheimnisvolles Artefakt, das ihr die Geschichte der Höhle offenbarte.
```
## Evaluation
**Open LLM Leaderboard:**
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | **73.11** |
| ARC (25-shot) | 59.22 |
| HellaSwag (10-shot) | 82.32 |
| MMLU (5-shot) | 74.40|
| TruthfulQA (0-shot) | 61.03 |
| Winogrande (5-shot) | 82.16 |
| GSM8K (5-shot) | 79.53 |
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/)
## Acknowledgement
Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such a valuable model to the open-source community
|
{"language": ["de", "en"], "license": "other", "tags": ["sft", "dpo"], "license_name": "tongyi-qianwen-research", "license_link": "https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE"}
|
blockblockblock/SauerkrautLM-Qwen-32b-bpw5
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"sft",
"dpo",
"conversational",
"de",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"5-bit",
"region:us"
] | null |
2024-04-15T13:39:20+00:00
|
[] |
[
"de",
"en"
] |
TAGS
#transformers #safetensors #qwen2 #text-generation #sft #dpo #conversational #de #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us
|
!SauerkrautLM
VAGO solutions SauerkrautLM-Qwen-32b
------------------------------------
Introducing SauerkrautLM-Qwen-32b – our Sauerkraut version of the powerful Qwen/Qwen1.5-32B!
The model SauerkrautLM-Qwen-32b is a joint effort between VAGO solutions and URL.
* Finetuned with SFT
* Aligned with DPO
Table of Contents
=================
1. Overview of all SauerkrautLM-Qwen-32b
2. Model Details
* Prompt template
* Training procedure
3. Evaluation
4. Disclaimer
5. Contact
6. Collaborations
7. Acknowledgement
All SauerkrautLM-Qwen-32b
-------------------------
Model Details
-------------
SauerkrautLM-Qwen-32b
* Model Type: SauerkrautLM-Qwen-32b is a finetuned Model based on Qwen/Qwen1.5-32B
* Language(s): German, English
* License: tongyi-qianwen-research
* Contact: VAGO solutions, URL
### Training procedure:
* We trained this model for 2 epochs on 160k data samples with SFT.
* Afterwards we applied DPO for 1 epoch with 110k data.
* LaserRMT version coming soon
We taught this model German language skills. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).
### Prompt Template:
English:
German:
### Example output in German:
Evaluation
----------
Open LLM Leaderboard:
Disclaimer
----------
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
Contact
-------
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
Collaborations
--------------
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer
Acknowledgement
---------------
Many thanks to Qwen for providing such a valuable model to the open-source community
|
[
"### Training procedure:\n\n\n* We trained this model for 2 epochs on 160k data samples with SFT.\n* Afterwards we applied DPO for 1 epoch with 110k data.\n* LaserRMT version coming soon\n\n\nWe teached German language skills on this model. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).",
"### Prompt Template:\n\n\nEnglish:\n\n\nGerman:",
"### Example output of german language:\n\n\nEvaluation\n----------\n\n\nOpen LLM Leaderboard:\n\n\n\nDisclaimer\n----------\n\n\nWe must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.\nHowever, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.\nAdditionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.\n\n\nContact\n-------\n\n\nIf you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.\n\n\nCollaborations\n--------------\n\n\nWe are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer\n\n\nAcknowledgement\n---------------\n\n\nMany thanks to Qwen for providing such valuable model to the Open-Source community"
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #sft #dpo #conversational #de #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us \n",
"### Training procedure:\n\n\n* We trained this model for 2 epochs on 160k data samples with SFT.\n* Afterwards we applied DPO for 1 epoch with 110k data.\n* LaserRMT version coming soon\n\n\nWe teached German language skills on this model. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).",
"### Prompt Template:\n\n\nEnglish:\n\n\nGerman:",
"### Example output of german language:\n\n\nEvaluation\n----------\n\n\nOpen LLM Leaderboard:\n\n\n\nDisclaimer\n----------\n\n\nWe must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.\nHowever, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.\nAdditionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.\n\n\nContact\n-------\n\n\nIf you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.\n\n\nCollaborations\n--------------\n\n\nWe are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer\n\n\nAcknowledgement\n---------------\n\n\nMany thanks to Qwen for providing such valuable model to the Open-Source community"
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="jayjay19630/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
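Once the environment is created, the Q-table can be rolled out greedily. A rough sketch, assuming the pickled dict stores the table under a `qtable` key (as in the Deep RL course notebooks) and a gym ≥ 0.26 step API:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the greedy action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```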
|
{"tags": ["Taxi-v3-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3-4x4-no_slippery", "type": "Taxi-v3-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
|
jayjay19630/q-FrozenLake-v1-4x4-noSlippery
| null |
[
"Taxi-v3-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-15T13:39:26+00:00
|
[] |
[] |
TAGS
#Taxi-v3-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
|
[
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
[
"TAGS\n#Taxi-v3-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Citaman/command-r-34-layer](https://huggingface.co/Citaman/command-r-34-layer)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Citaman/command-r-34-layer
layer_range: [0, 33]
- model: Citaman/command-r-34-layer
layer_range: [1, 34]
merge_method: slerp
base_model: Citaman/command-r-34-layer
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
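A minimal sketch of actually running this merge, assuming the YAML above is saved as `config.yaml` and mergekit is installed (`pip install mergekit`); the `mergekit-yaml` entry point takes the config and an output directory:
```python
import subprocess

# Basic documented usage is `mergekit-yaml <config> <output-dir>`;
# extra flags (e.g. for CUDA offload) can be appended as needed.
subprocess.run(["mergekit-yaml", "config.yaml", "./command-r-33-layer"], check=True)
```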
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-34-layer"]}
|
Citaman/command-r-33-layer
| null |
[
"transformers",
"safetensors",
"cohere",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Citaman/command-r-34-layer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:41:30+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-34-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Citaman/command-r-34-layer
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-34-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-34-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-34-layer",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mlabonne/AlphaMonarch-7B #mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
# - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1
# layer_range: [0, 32]
# - model: mlabonne/AlphaMonarch-7B
# layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [0, 32]
#mlabonne/AlphaMonarch-7B
merge_method: slerp
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 #mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
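The merged checkpoint remains a standard Mistral-architecture model, so it loads like any other causal LM — a minimal sketch (repo id taken from this card; generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tvkkishore/Inspire-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("What does a SLERP merge do?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```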
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["cognitivecomputations/dolphin-2.8-mistral-7b-v02", "mlabonne/AlphaMonarch-7B"]}
|
tvkkishore/Inspire-7B-slerp
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"base_model:mlabonne/AlphaMonarch-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:41:31+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #base_model-mlabonne/AlphaMonarch-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* cognitivecomputations/dolphin-2.8-mistral-7b-v02
* mlabonne/AlphaMonarch-7B
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02\n* mlabonne/AlphaMonarch-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #base_model-mlabonne/AlphaMonarch-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02\n* mlabonne/AlphaMonarch-7B",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-amazon-en-es
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1275
- Rouge1: 90.2312
- Rouge2: 83.2787
- Rougel: 88.0196
- Rougelsum: 87.9916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
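These settings map directly onto the Trainer API. A hedged sketch of the corresponding configuration (model and dataset setup omitted; `output_dir` and the evaluation strategy are assumptions):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-amazon-en-es",  # assumption
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=12,
    evaluation_strategy="epoch",   # assumption: the results table shows one eval per epoch
    predict_with_generate=True,    # required to compute ROUGE on generated summaries
)
```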
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.113 | 1.0 | 97 | 0.1067 | 90.4949 | 83.4088 | 87.98 | 87.9287 |
| 0.0856 | 2.0 | 194 | 0.1052 | 90.6604 | 83.7509 | 88.1407 | 88.0726 |
| 0.0723 | 3.0 | 291 | 0.1060 | 91.4193 | 84.9487 | 88.9628 | 88.8729 |
| 0.064 | 4.0 | 388 | 0.1119 | 89.7878 | 83.0958 | 87.321 | 87.2759 |
| 0.0556 | 5.0 | 485 | 0.1156 | 90.5422 | 83.8358 | 88.4229 | 88.3887 |
| 0.0515 | 6.0 | 582 | 0.1126 | 90.4997 | 83.4321 | 88.1359 | 88.1405 |
| 0.0456 | 7.0 | 679 | 0.1158 | 90.5983 | 83.8471 | 88.5468 | 88.4302 |
| 0.0468 | 8.0 | 776 | 0.1189 | 90.3242 | 83.5413 | 88.2592 | 88.2061 |
| 0.0416 | 9.0 | 873 | 0.1225 | 90.2886 | 83.1885 | 88.0928 | 88.0366 |
| 0.0385 | 10.0 | 970 | 0.1252 | 89.8331 | 82.8606 | 87.3511 | 87.335 |
| 0.0377 | 11.0 | 1067 | 0.1269 | 89.9057 | 83.057 | 87.6798 | 87.6802 |
| 0.0368 | 12.0 | 1164 | 0.1275 | 90.2312 | 83.2787 | 88.0196 | 87.9916 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "base_model": "google-t5/t5-base", "model-index": [{"name": "t5-base-finetuned-amazon-en-es", "results": []}]}
|
JohnDoe70/t5-base-finetuned-amazon-en-es
| null |
[
"transformers",
"tensorboard",
"onnx",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:45:55+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #onnx #safetensors #t5 #text2text-generation #summarization #generated_from_trainer #base_model-google-t5/t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-base-finetuned-amazon-en-es
==============================
This model is a fine-tuned version of google-t5/t5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1275
* Rouge1: 90.2312
* Rouge2: 83.2787
* Rougel: 88.0196
* Rougelsum: 87.9916
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 12
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #onnx #safetensors #t5 #text2text-generation #summarization #generated_from_trainer #base_model-google-t5/t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Information
VolareQuantized is a compact iteration of the model [Volare](https://huggingface.co/MoxoffSpA/Volare), optimized for efficiency.
It is offered in two distinct configurations: a 4-bit version and an 8-bit version, each designed to maintain the model's effectiveness while significantly reducing its size
and computational requirements.
- It's trained both on publicly available datasets, like [SQUAD-it](https://huggingface.co/datasets/squad_it), and datasets we've created in-house.
- It's designed to understand and maintain context, making it ideal for Retrieval Augmented Generation (RAG) tasks and applications requiring contextual awareness.
- It is quantized in a 4-bit version and an 8-bit version following the procedure [here](https://github.com/ggerganov/llama.cpp).
# Evaluation
We evaluated the model using the same test sets as used for the [Open Ita LLM Leaderboard](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard)
| hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average | F1 |
|:----------------------| :--------------- | :-------------------- | :------- | :-- |
| 0.6474 | 0.4671 | 0.5521 | 0.555 | 69.82 |
## Usage
You need to download the .gguf model file first.
If you want to use the CPU, install these dependencies:
```bash
pip install llama-cpp-python huggingface_hub
```
If you want to use the GPU instead:
```bash
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install huggingface_hub llama-cpp-python --force-reinstall --upgrade --no-cache-dir
```
Then use this code to get a response to the prompt.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
model_path = hf_hub_download(
repo_id="MoxoffSpA/VolareQuantized",
filename="Volare-ggml-Q4_K_M.gguf"
)
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path=model_path,
n_ctx=2048, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=0 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
question = """Quanto è alta la torre di Pisa?"""
context = """
La Torre di Pisa è un campanile del XII secolo, famoso per la sua inclinazione. Alta circa 56 metri.
"""
prompt = f"Domanda: {question}, contesto: {context}"
output = llm(
f"[INST] {prompt} [/INST]", # Prompt
max_tokens=128,
stop=["\n"],
echo=True,
temperature=0.1,
top_p=0.95
)
# Chat Completion API
print(output['choices'][0]['text'])
```
## Bias, Risks and Limitations
VolareQuantized and its original model have not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of
responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what size and composition of
corpus were used to train the base model; however, it is likely to have included a mix of web data and technical sources
such as books and code.
## Links to resources
- SQUAD-it dataset: https://huggingface.co/datasets/squad_it
- Gemma-7b model: https://huggingface.co/google/gemma-7b
- Open Ita LLM Leaderboard: https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard
## Quantized versions
The non-quantized version is available here:
https://huggingface.co/MoxoffSpA/Volare
## The Moxoff Team
Jacopo Abate, Marco D'Ambra, Luigi Simeone, Gianpaolo Francesco Trotta
|
{"language": ["it", "en"], "license": "mit", "library_name": "transformers", "tags": ["sft", "it", "gemma", "chatml"]}
|
MoxoffSpA/VolareQuantized
| null |
[
"transformers",
"gguf",
"gemma",
"text-generation",
"sft",
"it",
"chatml",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:48:16+00:00
|
[] |
[
"it",
"en"
] |
TAGS
#transformers #gguf #gemma #text-generation #sft #it #chatml #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Model Information
=================
VolareQuantized is a compact iteration of the model Volare, optimized for efficiency.
It is offered in two distinct configurations: a 4-bit version and an 8-bit version, each designed to maintain the model's effectiveness while significantly reducing its size
and computational requirements.
* It's trained both on publicly available datasets, like SQUAD-it, and datasets we've created in-house.
* It's designed to understand and maintain context, making it ideal for Retrieval Augmented Generation (RAG) tasks and applications requiring contextual awareness.
* It is quantized in a 4-bit version and an 8-bit version following the procedure here.
Evaluation
==========
We evaluated the model using the same test sets as used for the Open Ita LLM Leaderboard
Usage
-----
You need to download the .gguf model file first
If you want to use the CPU, install these dependencies:
If you want to use the GPU instead:
Then use this code to get a response to the prompt.
Bias, Risks and Limitations
---------------------------
VolareQuantized and its original model have not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of
responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what size and composition of
corpus were used to train the base model; however, it is likely to have included a mix of web data and technical sources
such as books and code.
Links to resources
------------------
* SQUAD-it dataset: URL
* Gemma-7b model: URL
* Open Ita LLM Leaderboard: URL
Quantized versions
------------------
The non-quantized version is available here:
URL
The Moxoff Team
---------------
Jacopo Abate, Marco D'Ambra, Luigi Simeone, Gianpaolo Francesco Trotta
|
[] |
[
"TAGS\n#transformers #gguf #gemma #text-generation #sft #it #chatml #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
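A minimal completion of the stub above — the checkpoint filename is an assumption, so check the repository's file list for the exact name:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("degra02/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename is an assumption
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```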
|
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "261.10 +/- 18.61", "name": "mean_reward", "verified": false}]}]}]}
|
degra02/ppo-LunarLander-v2
| null |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-15T13:52:29+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
|
[
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
[
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
redmojo7/gemma-2b-it-finetune-palo-alto-network-auto-20
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:53:06+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
nielsr/vitpose_base
| null |
[
"transformers",
"safetensors",
"vitpose",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T13:53:14+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #vitpose #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #vitpose #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Information
Volare is an updated version of [Gemma7B](https://huggingface.co/google/gemma-7b), specifically fine-tuned with SFT and LoRA adjustments.
- It's trained on publicly available datasets, such as [SQUAD-it](https://huggingface.co/datasets/squad_it), and datasets we've created in-house.
- It's designed to understand and maintain context, making it ideal for Retrieval Augmented Generation (RAG) tasks and applications requiring contextual awareness.
# Evaluation
We evaluated the model using the same test sets as used for the [Open Ita LLM Leaderboard](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard)
| hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average | F1 |
|:----------------------| :--------------- | :-------------------- | :------- | :-- |
| 0.6474 | 0.4671 | 0.5521 | 0.555 | 69.82 |
## Usage
Be sure to install these dependencies before running the program
```python
!pip install transformers torch sentencepiece
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cpu" # if you want to use the gpu make sure to have cuda toolkit installed and change this to "cuda"
model = AutoModelForCausalLM.from_pretrained("MoxoffSpA/Volare")
tokenizer = AutoTokenizer.from_pretrained("MoxoffSpA/Volare")
question = """Quanto è alta la torre di Pisa?"""
context = """
La Torre di Pisa è un campanile del XII secolo, famoso per la sua inclinazione. Alta circa 56 metri.
"""
prompt = f"Domanda: {question}, contesto: {context}"
messages = [
{"role": "user", "content": prompt}
]
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")  # add_generation_prompt appends the assistant-turn marker so the model replies instead of continuing the user turn
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(
model_inputs, # The input to the model
max_new_tokens=128, # Limiting the maximum number of new tokens generated
do_sample=True, # Enabling sampling to introduce randomness in the generation
temperature=0.1, # Setting temperature to control the randomness, lower values make it more deterministic
top_p=0.95, # Using nucleus sampling with top-p filtering for more coherent generation
eos_token_id=tokenizer.eos_token_id # Specifying the token that indicates the end of a sequence
)
decoded_output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
trimmed_output = decoded_output.strip()
print(trimmed_output)
```
## Bias, Risks and Limitations
Volare has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of
responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to
train the base model are also unknown, but it is likely to have included a mix of web data and technical sources
such as books and code.
## Links to resources
- SQUAD-it dataset: https://huggingface.co/datasets/squad_it
- Gemma-7b model: https://huggingface.co/google/gemma-7b
- Open Ita LLM Leaderboard: https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard
## Quantized versions
We have also published 4-bit and 8-bit quantized versions of this model:
https://huggingface.co/MoxoffSpA/VolareQuantized
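If you prefer to quantize on the fly rather than download those checkpoints, a minimal sketch using 4-bit loading via `bitsandbytes` (our assumption, not verified against the published quantized weights; it requires a CUDA GPU and the `bitsandbytes` package) could look like this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical on-the-fly 4-bit (NF4) quantization of the full-precision weights
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "MoxoffSpA/Volare",
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU(s)
)
tokenizer = AutoTokenizer.from_pretrained("MoxoffSpA/Volare")
```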
## The Moxoff Team
Jacopo Abate, Marco D'Ambra, Luigi Simeone, Gianpaolo Francesco Trotta
|
{"language": ["it", "en"], "license": "mit", "library_name": "transformers", "tags": ["sft", "it", "gemma", "chatml"]}
|
MoxoffSpA/Volare
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"sft",
"it",
"chatml",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:54:16+00:00
|
[] |
[
"it",
"en"
] |
TAGS
#transformers #safetensors #gemma #text-generation #sft #it #chatml #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Model Information
=================
Volare is an updated version of Gemma7B, specifically fine-tuned with SFT and LoRA adjustments.
* It's trained on publicly available datasets, such as SQUAD-it, and datasets we've created in-house.
* It's designed to understand and maintain context, making it ideal for Retrieval Augmented Generation (RAG) tasks and applications requiring contextual awareness.
Evaluation
==========
We evaluated the model using the same test sets as used for the Open Ita LLM Leaderboard
Usage
-----
Be sure to install these dependencies before running the program
Bias, Risks and Limitations
---------------------------
Volare has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of
responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to
train the base model are also unknown, but it is likely to have included a mix of web data and technical sources
such as books and code.
Links to resources
------------------
* SQUAD-it dataset: URL
* Gemma-7b model: URL
* Open Ita LLM Leaderboard: URL
Quantized versions
------------------
We have also published 4-bit and 8-bit quantized versions of this model:
URL
The Moxoff Team
---------------
Jacopo Abate, Marco D'Ambra, Luigi Simeone, Gianpaolo Francesco Trotta
|
[] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #sft #it #chatml #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
0x0son0/sl104
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T13:55:00+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image
|
diffusers
|
# Custom Diffusion - SidXXD/Attn_Maps-mist-mask-0
These are Custom Diffusion adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> dog using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
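The card does not include an inference snippet; a minimal sketch following the standard diffusers Custom Diffusion loading pattern (the weight file names below are assumptions based on the defaults of the linked diffusers training script) might look like this:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Attach the Custom Diffusion attention weights and the <new1> token embedding
# (file names assume the diffusers training script defaults).
pipe.unet.load_attn_procs(
    "SidXXD/Attn_Maps-mist-mask-0", weight_name="pytorch_custom_diffusion_weights.bin"
)
pipe.load_textual_inversion("SidXXD/Attn_Maps-mist-mask-0", weight_name="<new1>.bin")

image = pipe("photo of a <new1> dog", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("new1_dog.png")
```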
|
{"license": "creativeml-openrail-m", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "custom-diffusion"], "base_model": "CompVis/stable-diffusion-v1-4", "instance_prompt": "photo of a <new1> dog", "inference": true}
|
SidXXD/Attn_Maps-mist-mask-0
| null |
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-15T13:55:25+00:00
|
[] |
[] |
TAGS
#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #custom-diffusion #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Custom Diffusion - SidXXD/Attn_Maps-mist-mask-0
These are Custom Diffusion adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> dog using Custom Diffusion. You can find some example images in the following.
For more details on the training, please follow this link.
|
[
"# Custom Diffusion - SidXXD/Attn_Maps-mist-mask-0\n\nThese are Custom Diffusion adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> dog using Custom Diffusion. You can find some example images in the following. \n\n\n\n\nFor more details on the training, please follow this link."
] |
[
"TAGS\n#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #custom-diffusion #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Custom Diffusion - SidXXD/Attn_Maps-mist-mask-0\n\nThese are Custom Diffusion adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> dog using Custom Diffusion. You can find some example images in the following. \n\n\n\n\nFor more details on the training, please follow this link."
] |
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
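No snippet has been provided yet; a minimal sketch, assuming this repository holds a PEFT adapter for the base model declared in the card metadata (`stabilityai/stablelm-3b-4e1t`), could look like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the adapter from this repository
# (assumed to be a LoRA-style PEFT adapter).
base = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-3b-4e1t", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "AY2324S2-CS4248-Team-47/StableLM-WI_Locness")
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t", trust_remote_code=True)
```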
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0
|
{"library_name": "peft", "base_model": "stabilityai/stablelm-3b-4e1t"}
|
AY2324S2-CS4248-Team-47/StableLM-WI_Locness
| null |
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:stabilityai/stablelm-3b-4e1t",
"region:us"
] | null |
2024-04-15T13:57:06+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-stabilityai/stablelm-3b-4e1t #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
[
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-stabilityai/stablelm-3b-4e1t #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
text-to-image
|
diffusers
|
# Custom Diffusion - SidXXD/Attn_Maps-mist-mask-1
These are Custom Diffusion adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> dog using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
{"license": "creativeml-openrail-m", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "custom-diffusion"], "base_model": "CompVis/stable-diffusion-v1-4", "instance_prompt": "photo of a <new1> dog", "inference": true}
|
SidXXD/Attn_Maps-mist-mask-1
| null |
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-15T13:57:14+00:00
|
[] |
[] |
TAGS
#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #custom-diffusion #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Custom Diffusion - SidXXD/Attn_Maps-mist-mask-1
These are Custom Diffusion adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> dog using Custom Diffusion. You can find some example images in the following.
For more details on the training, please follow this link.
|
[
"# Custom Diffusion - SidXXD/Attn_Maps-mist-mask-1\n\nThese are Custom Diffusion adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> dog using Custom Diffusion. You can find some example images in the following. \n\n\n\n\nFor more details on the training, please follow this link."
] |
[
"TAGS\n#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #custom-diffusion #base_model-CompVis/stable-diffusion-v1-4 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Custom Diffusion - SidXXD/Attn_Maps-mist-mask-1\n\nThese are Custom Diffusion adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> dog using Custom Diffusion. You can find some example images in the following. \n\n\n\n\nFor more details on the training, please follow this link."
] |