| column | type |
|:----------------|:--------------------------|
| pipeline_tag | stringclasses (48 values) |
| library_name | stringclasses (198 values) |
| text | stringlengths 1 to 900k |
| metadata | stringlengths 2 to 438k |
| id | stringlengths 5 to 122 |
| last_modified | null |
| tags | listlengths 1 to 1.84k |
| sha | null |
| created_at | stringlengths 25 to 25 |
| arxiv | listlengths 0 to 201 |
| languages | listlengths 0 to 1.83k |
| tags_str | stringlengths 17 to 9.34k |
| text_str | stringlengths 0 to 389k |
| text_lists | listlengths 0 to 722 |
| processed_texts | listlengths 1 to 723 |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
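For reference, spherical linear interpolation (SLERP) blends each pair of weight tensors along the arc between them rather than along a straight line. With interpolation weight $t \in [0, 1]$ and angle $\Omega$ between the flattened tensors $w_0$ and $w_1$:

$$
\operatorname{slerp}(w_0, w_1; t) = \frac{\sin\big((1-t)\,\Omega\big)}{\sin \Omega}\, w_0 + \frac{\sin(t\,\Omega)}{\sin \Omega}\, w_1,
\qquad
\cos \Omega = \frac{w_0 \cdot w_1}{\lVert w_0 \rVert\,\lVert w_1 \rVert}
$$

The `parameters.t` block in the configuration below sets $t$ per layer and per module type (self-attention versus MLP).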
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [arcee-ai/sec-mistral-7b-instruct-1.6-epoch](https://huggingface.co/arcee-ai/sec-mistral-7b-instruct-1.6-epoch)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: arcee-ai/sec-mistral-7b-instruct-1.6-epoch
        layer_range: [0, 32]
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [0, 32]
merge_method: slerp
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
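As a rough sketch (not part of the original card): the configuration above can be saved to a file and passed to mergekit's `mergekit-yaml` CLI (`mergekit-yaml config.yml ./output-model-directory`), and the published merge can then be loaded like any causal LM. The dtype below mirrors the `dtype: bfloat16` in the config; the prompt and generation settings are illustrative.

```python
# Minimal loading sketch; device placement and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mergekit-community/mergekit-slerp-hsdezod"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Explain what a SLERP merge does.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```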
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["cognitivecomputations/dolphin-2.8-mistral-7b-v02", "arcee-ai/sec-mistral-7b-instruct-1.6-epoch"]}
|
mergekit-community/mergekit-slerp-hsdezod
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T18:12:18+00:00
|
[] |
[] |
null |
peft
|
### Model Description
Used to create therapeutic-assistance chatbots, so that conversations can be held in a situation of need.
- **Developed by:** Julio Fullaondo Canga
- **Language(s) (NLP):** Spanish
- **Finetuned from model [optional]:** gemma-2b-it
- PEFT 0.10.0
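A minimal loading sketch, assuming the adapter is applied on top of the google/gemma-2b-it base declared in the metadata:

```python
# Sketch only: load the base model, then apply the PEFT adapter on top.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
model = PeftModel.from_pretrained(base, "Juliofc/chaterapi_model")
```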
|
{"library_name": "peft", "base_model": "google/gemma-2b-it"}
|
Juliofc/chaterapi_model
| null |
[
"peft",
"safetensors",
"base_model:google/gemma-2b-it",
"region:us"
] | null |
2024-04-15T18:12:21+00:00
|
[] |
[] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_4096_512_27M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8809
- F1 Score: 0.7163
- Accuracy: 0.7183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
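For orientation, these settings map roughly onto the Hugging Face `TrainingArguments` API as sketched below; whether 2048 is a per-device or total batch size, and the output directory name, are assumptions rather than details from the original script.

```python
# Approximate mapping of the listed hyperparameters; not the original training code.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_splice_reconstructed-seqsight_4096_512_27M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```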
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.9494 | 11.11 | 200 | 0.8131 | 0.5942 | 0.6355 |
| 0.7509 | 22.22 | 400 | 0.7273 | 0.6581 | 0.6725 |
| 0.6672 | 33.33 | 600 | 0.7117 | 0.6774 | 0.6817 |
| 0.6228 | 44.44 | 800 | 0.7048 | 0.6914 | 0.6931 |
| 0.5892 | 55.56 | 1000 | 0.7054 | 0.6919 | 0.6957 |
| 0.5622 | 66.67 | 1200 | 0.7037 | 0.6970 | 0.6997 |
| 0.5392 | 77.78 | 1400 | 0.7157 | 0.6961 | 0.7014 |
| 0.5177 | 88.89 | 1600 | 0.7197 | 0.7034 | 0.7069 |
| 0.4983 | 100.0 | 1800 | 0.7223 | 0.7009 | 0.7019 |
| 0.4802 | 111.11 | 2000 | 0.7373 | 0.7006 | 0.7019 |
| 0.4652 | 122.22 | 2200 | 0.7439 | 0.7070 | 0.7104 |
| 0.4502 | 133.33 | 2400 | 0.7619 | 0.7084 | 0.7133 |
| 0.4352 | 144.44 | 2600 | 0.7775 | 0.7066 | 0.7107 |
| 0.421 | 155.56 | 2800 | 0.7750 | 0.7107 | 0.7120 |
| 0.4068 | 166.67 | 3000 | 0.7968 | 0.7065 | 0.7093 |
| 0.3962 | 177.78 | 3200 | 0.7926 | 0.7110 | 0.7148 |
| 0.3836 | 188.89 | 3400 | 0.7969 | 0.7098 | 0.7111 |
| 0.3701 | 200.0 | 3600 | 0.8152 | 0.7097 | 0.7155 |
| 0.359 | 211.11 | 3800 | 0.8291 | 0.7060 | 0.7085 |
| 0.3477 | 222.22 | 4000 | 0.8425 | 0.7109 | 0.7146 |
| 0.3353 | 233.33 | 4200 | 0.8454 | 0.7108 | 0.7131 |
| 0.3284 | 244.44 | 4400 | 0.8596 | 0.7104 | 0.7139 |
| 0.3194 | 255.56 | 4600 | 0.8701 | 0.7091 | 0.7117 |
| 0.3101 | 266.67 | 4800 | 0.8964 | 0.7103 | 0.7150 |
| 0.3013 | 277.78 | 5000 | 0.8771 | 0.7096 | 0.7122 |
| 0.2943 | 288.89 | 5200 | 0.8922 | 0.7174 | 0.7196 |
| 0.2863 | 300.0 | 5400 | 0.9038 | 0.7136 | 0.7166 |
| 0.2801 | 311.11 | 5600 | 0.9100 | 0.7121 | 0.7164 |
| 0.2743 | 322.22 | 5800 | 0.9470 | 0.7026 | 0.7082 |
| 0.2685 | 333.33 | 6000 | 0.9195 | 0.7102 | 0.7133 |
| 0.2612 | 344.44 | 6200 | 0.9349 | 0.7072 | 0.7102 |
| 0.2579 | 355.56 | 6400 | 0.9303 | 0.7088 | 0.7120 |
| 0.2518 | 366.67 | 6600 | 0.9516 | 0.7021 | 0.7054 |
| 0.2466 | 377.78 | 6800 | 0.9504 | 0.7085 | 0.7120 |
| 0.2436 | 388.89 | 7000 | 0.9677 | 0.7081 | 0.7128 |
| 0.2401 | 400.0 | 7200 | 0.9589 | 0.7041 | 0.7065 |
| 0.2347 | 411.11 | 7400 | 0.9524 | 0.7053 | 0.7080 |
| 0.2327 | 422.22 | 7600 | 0.9812 | 0.7058 | 0.7091 |
| 0.2304 | 433.33 | 7800 | 0.9754 | 0.7091 | 0.7124 |
| 0.2249 | 444.44 | 8000 | 0.9908 | 0.7043 | 0.7074 |
| 0.2245 | 455.56 | 8200 | 0.9678 | 0.7092 | 0.7113 |
| 0.22 | 466.67 | 8400 | 0.9959 | 0.7052 | 0.7087 |
| 0.2192 | 477.78 | 8600 | 0.9869 | 0.7072 | 0.7102 |
| 0.218 | 488.89 | 8800 | 0.9895 | 0.7074 | 0.7109 |
| 0.2151 | 500.0 | 9000 | 0.9895 | 0.7077 | 0.7104 |
| 0.2149 | 511.11 | 9200 | 0.9940 | 0.7084 | 0.7113 |
| 0.2124 | 522.22 | 9400 | 0.9951 | 0.7085 | 0.7113 |
| 0.2121 | 533.33 | 9600 | 1.0009 | 0.7066 | 0.7102 |
| 0.2117 | 544.44 | 9800 | 0.9939 | 0.7057 | 0.7085 |
| 0.21 | 555.56 | 10000 | 0.9993 | 0.7066 | 0.7098 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_4096_512_27M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_4096_512_27M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_27M",
"region:us"
] | null |
2024-04-15T18:12:30+00:00
|
[] |
[] |
null |
transformers
|
# Model Card for Model ID
This model was created following the instructions in https://www.datacamp.com/tutorial/fine-tuning-google-gemma. It is a PEFT adapter on Gemma-7B, fine-tuned on a character-dialogue dataset.
## Model Details
### Model Description
The model can role-play interesting fictional and celebrity characters as they appear in the dataset.
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Model type:** Causal LM
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model:** Gemma-7B-it
## Uses
A demo model for trying out PEFT training.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Training Details
### Training Data
Role-play training data: hieunguyenminh/roleplay
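A minimal sketch for pulling the training data and applying the published adapter; the `train` split name and the google/gemma-7b-it base id are assumptions based on the card rather than confirmed details:

```python
# Sketch only: dataset split and base model id are assumptions.
from datasets import load_dataset
from peft import PeftModel
from transformers import AutoModelForCausalLM

dataset = load_dataset("hieunguyenminh/roleplay", split="train")
base = AutoModelForCausalLM.from_pretrained("google/gemma-7b-it")
model = PeftModel.from_pretrained(base, "APaul1/gemma-7b-it-v2-role-play")
```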
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** T4 (2)
- **Hours used:** 4
- **Cloud Provider:** Kaggle
- **Compute Region:** North America
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
APaul1/gemma-7b-it-v2-role-play
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:13:20+00:00
|
[
"1910.09700"
] |
[] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Grayx/sad_pepe_24
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:16:14+00:00
|
[
"1910.09700"
] |
[] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_0-seqsight_4096_512_27M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5407
- F1 Score: 0.7387
- Accuracy: 0.741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6038 | 12.5 | 200 | 0.5794 | 0.7009 | 0.701 |
| 0.5265 | 25.0 | 400 | 0.5946 | 0.7061 | 0.706 |
| 0.492 | 37.5 | 600 | 0.5830 | 0.7115 | 0.713 |
| 0.4611 | 50.0 | 800 | 0.5983 | 0.7159 | 0.718 |
| 0.4378 | 62.5 | 1000 | 0.6050 | 0.7033 | 0.704 |
| 0.4185 | 75.0 | 1200 | 0.6200 | 0.7054 | 0.707 |
| 0.3993 | 87.5 | 1400 | 0.6218 | 0.7065 | 0.708 |
| 0.3814 | 100.0 | 1600 | 0.6444 | 0.7087 | 0.709 |
| 0.369 | 112.5 | 1800 | 0.6365 | 0.7029 | 0.704 |
| 0.3525 | 125.0 | 2000 | 0.6528 | 0.7071 | 0.709 |
| 0.3389 | 137.5 | 2200 | 0.6582 | 0.6980 | 0.699 |
| 0.3264 | 150.0 | 2400 | 0.7220 | 0.7019 | 0.703 |
| 0.3113 | 162.5 | 2600 | 0.7091 | 0.7046 | 0.705 |
| 0.3007 | 175.0 | 2800 | 0.7062 | 0.6971 | 0.697 |
| 0.2901 | 187.5 | 3000 | 0.7475 | 0.7012 | 0.702 |
| 0.2788 | 200.0 | 3200 | 0.7666 | 0.7078 | 0.708 |
| 0.2701 | 212.5 | 3400 | 0.7788 | 0.6926 | 0.693 |
| 0.2599 | 225.0 | 3600 | 0.7830 | 0.7028 | 0.703 |
| 0.2504 | 237.5 | 3800 | 0.8649 | 0.6977 | 0.698 |
| 0.2426 | 250.0 | 4000 | 0.8136 | 0.7025 | 0.703 |
| 0.2345 | 262.5 | 4200 | 0.8935 | 0.6959 | 0.696 |
| 0.2279 | 275.0 | 4400 | 0.8538 | 0.7118 | 0.712 |
| 0.2207 | 287.5 | 4600 | 0.8795 | 0.7049 | 0.705 |
| 0.2147 | 300.0 | 4800 | 0.8765 | 0.7055 | 0.706 |
| 0.2092 | 312.5 | 5000 | 0.9203 | 0.7055 | 0.706 |
| 0.2027 | 325.0 | 5200 | 0.9245 | 0.6970 | 0.698 |
| 0.1979 | 337.5 | 5400 | 0.9143 | 0.7073 | 0.708 |
| 0.1924 | 350.0 | 5600 | 0.9271 | 0.7038 | 0.704 |
| 0.1873 | 362.5 | 5800 | 0.9698 | 0.7017 | 0.702 |
| 0.1825 | 375.0 | 6000 | 0.9410 | 0.7069 | 0.707 |
| 0.1793 | 387.5 | 6200 | 0.9759 | 0.7077 | 0.708 |
| 0.1766 | 400.0 | 6400 | 0.9790 | 0.7008 | 0.701 |
| 0.1712 | 412.5 | 6600 | 0.9751 | 0.7079 | 0.708 |
| 0.1684 | 425.0 | 6800 | 0.9931 | 0.7020 | 0.702 |
| 0.1651 | 437.5 | 7000 | 1.0042 | 0.7000 | 0.7 |
| 0.1625 | 450.0 | 7200 | 1.0018 | 0.7058 | 0.706 |
| 0.1582 | 462.5 | 7400 | 1.0176 | 0.7069 | 0.707 |
| 0.1553 | 475.0 | 7600 | 1.0117 | 0.7009 | 0.701 |
| 0.1544 | 487.5 | 7800 | 1.0138 | 0.7059 | 0.706 |
| 0.1522 | 500.0 | 8000 | 1.0197 | 0.7016 | 0.702 |
| 0.1501 | 512.5 | 8200 | 1.0125 | 0.7070 | 0.707 |
| 0.1476 | 525.0 | 8400 | 1.0364 | 0.6990 | 0.699 |
| 0.1459 | 537.5 | 8600 | 1.0589 | 0.7050 | 0.705 |
| 0.1443 | 550.0 | 8800 | 1.0520 | 0.6990 | 0.699 |
| 0.1436 | 562.5 | 9000 | 1.0480 | 0.7040 | 0.704 |
| 0.1411 | 575.0 | 9200 | 1.0406 | 0.7029 | 0.703 |
| 0.1418 | 587.5 | 9400 | 1.0420 | 0.6970 | 0.697 |
| 0.1405 | 600.0 | 9600 | 1.0441 | 0.7 | 0.7 |
| 0.1401 | 612.5 | 9800 | 1.0434 | 0.7010 | 0.701 |
| 0.1395 | 625.0 | 10000 | 1.0453 | 0.7020 | 0.702 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
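As with the other seqsight runs, the published adapter can in principle be applied back onto its base model. The `AutoModel` class and `trust_remote_code` flag below are assumptions about how the seqsight base loads, not details confirmed by this card:

```python
# Sketch only: the loading class for the seqsight base is an assumption.
from peft import PeftModel
from transformers import AutoModel

base = AutoModel.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_27M", trust_remote_code=True
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_tf_0-seqsight_4096_512_27M-L32_all"
)
```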
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_tf_0-seqsight_4096_512_27M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_tf_0-seqsight_4096_512_27M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_27M",
"region:us"
] | null |
2024-04-15T18:16:41+00:00
|
[] |
[] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Lugaborg/Valvalis
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:17:00+00:00
|
[
"1910.09700"
] |
[] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Lugaborg/Rubicant
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T18:18:07+00:00
|
[
"1910.09700"
] |
[] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
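The section above promises code but the generated card leaves it blank. A minimal sketch follows, using the repository id recorded later in this row (hiraltalsaniya/02_medical-llama2-7b-fine-tune); the medical prompt, device placement, and generation settings are illustrative assumptions, not documented behavior.

```python
# Hedged sketch: plain causal-LM inference for this Llama fine-tune.
# Requires `accelerate` for device_map="auto"; the prompt format is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hiraltalsaniya/02_medical-llama2-7b-fine-tune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What are common symptoms of iron deficiency?"  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```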
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
hiraltalsaniya/02_medical-llama2-7b-fine-tune
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T18:18:45+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0415MA2
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0650
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
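Taken together, the list above pins down a complete `TrainingArguments` configuration; a hedged reconstruction follows. The output directory is an assumption, and the Adam betas/epsilon in the list are already the transformers defaults, so they are omitted.

```python
# Hedged sketch: the hyperparameters listed above, expressed as a
# transformers TrainingArguments object. output_dir is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="V0415MA2",                    # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,           # 8 x 16 = 128 effective batch
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=60,
    num_train_epochs=3,
    fp16=True,                                # "Native AMP" (fp16 assumed)
)
```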
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2555 | 0.09 | 10 | 1.0684 |
| 0.4718 | 0.18 | 20 | 0.1179 |
| 0.1156 | 0.27 | 30 | 0.0892 |
| 0.0957 | 0.36 | 40 | 0.0782 |
| 0.0806 | 0.45 | 50 | 0.0723 |
| 0.0828 | 0.54 | 60 | 0.0704 |
| 0.0745 | 0.63 | 70 | 0.0687 |
| 0.0737 | 0.73 | 80 | 0.0682 |
| 0.0753 | 0.82 | 90 | 0.0633 |
| 0.0729 | 0.91 | 100 | 0.0590 |
| 0.0679 | 1.0 | 110 | 0.0632 |
| 0.057 | 1.09 | 120 | 0.0626 |
| 0.0612 | 1.18 | 130 | 0.0616 |
| 0.0559 | 1.27 | 140 | 0.0655 |
| 0.0509 | 1.36 | 150 | 0.0605 |
| 0.0591 | 1.45 | 160 | 0.0594 |
| 0.0563 | 1.54 | 170 | 0.0590 |
| 0.0543 | 1.63 | 180 | 0.0561 |
| 0.0503 | 1.72 | 190 | 0.0592 |
| 0.0593 | 1.81 | 200 | 0.0565 |
| 0.048 | 1.9 | 210 | 0.0579 |
| 0.047 | 1.99 | 220 | 0.0633 |
| 0.0361 | 2.08 | 230 | 0.0606 |
| 0.0366 | 2.18 | 240 | 0.0635 |
| 0.0314 | 2.27 | 250 | 0.0656 |
| 0.031 | 2.36 | 260 | 0.0672 |
| 0.0348 | 2.45 | 270 | 0.0679 |
| 0.0317 | 2.54 | 280 | 0.0671 |
| 0.0299 | 2.63 | 290 | 0.0665 |
| 0.0361 | 2.72 | 300 | 0.0655 |
| 0.0351 | 2.81 | 310 | 0.0651 |
| 0.0334 | 2.9 | 320 | 0.0649 |
| 0.0371 | 2.99 | 330 | 0.0650 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0415MA2", "results": []}]}
|
Litzy619/V0415MA2
| null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"license:mit",
"region:us"
] | null |
2024-04-15T18:20:07+00:00
|
[] |
[] |
TAGS
#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
|
V0415MA2
========
This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0650
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 60
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.0.dev0
* Pytorch 2.1.2+cu121
* Datasets 2.14.6
* Tokenizers 0.14.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
[
"TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | null |
## WizardLM-2-8x22B-GGUF Quants
README to be updated as additional quants are uploaded.

Q4_K - ~80 GB
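Since the README is still a stub, here is a minimal hedged loading sketch via llama-cpp-python. The exact .gguf filename is an assumption (check the repository's file list), and at roughly 80 GB the Q4_K file needs correspondingly large RAM or offloading.

```python
# Hedged sketch: load the Q4_K quant with llama-cpp-python.
# The filename below is assumed; substitute the actual file from the repo.
from llama_cpp import Llama

llm = Llama(model_path="WizardLM-2-8x22B.Q4_K.gguf", n_ctx=4096)
out = llm("Explain mixture-of-experts models in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```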
|
{"license": "apache-2.0"}
|
praxeswolf0d/WizardLM-2-8x22B-GGUF
| null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T18:20:10+00:00
|
[] |
[] |
TAGS
#gguf #license-apache-2.0 #region-us
|
## WizardLM-2-8x22B-GGUF Quants
README to be updated as additional quants are uploaded.

Q4_K - ~80 GB
|
[
"## WizardLM-2-8x22B-GGUF Quants \n\nReadme to be updated as addtional quants are uploaded. \n\nQ4_K - ~80GB"
] |
[
"TAGS\n#gguf #license-apache-2.0 #region-us \n",
"## WizardLM-2-8x22B-GGUF Quants \n\nReadme to be updated as addtional quants are uploaded. \n\nQ4_K - ~80GB"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
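Because the generated card leaves this blank, the sketch below assumes the checkpoint is a CTC speech-recognition model, which the repository name (wav2vec2 on Common Voice) suggests but the card does not confirm; the silent dummy waveform stands in for real 16 kHz audio.

```python
# Hedged sketch: CTC transcription, assuming this is a wav2vec2 ASR checkpoint.
import numpy as np
import torch
from transformers import AutoModelForCTC, AutoProcessor

model_id = "Reihaneh/wav2vec2_germanic_common_voice_8"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

speech = np.zeros(16_000, dtype=np.float32)  # 1 s of silence as a stand-in
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```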
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Reihaneh/wav2vec2_germanic_common_voice_8
| null |
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:20:12+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ryan04152024_ALLDATA
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the properties dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1193
- Ordinal Mae: 0.3505
- Ordinal Accuracy: 0.7757
- Na Accuracy: 0.9411
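Since the card has no usage section, a minimal hedged inference sketch follows. The image path is a placeholder, and the ordinal/NA metrics above imply the labels encode an ordered quality scale plus a not-applicable class, which a plain pipeline call reports only as flat label scores.

```python
# Hedged sketch: plain image-classification inference for this ViT fine-tune.
# "example_property_photo.jpg" is a placeholder path.
from transformers import pipeline

clf = pipeline("image-classification", model="rshrott/ryan04152024_ALLDATA")
print(clf("example_property_photo.jpg"))
```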
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.02
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "ryan04152024_ALLDATA", "results": []}]}
|
rshrott/ryan04152024_ALLDATA
| null |
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:20:49+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# ryan04152024_ALLDATA
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the properties dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1193
- Ordinal Mae: 0.3505
- Ordinal Accuracy: 0.7757
- Na Accuracy: 0.9411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.02
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# ryan04152024_ALLDATA\n\nThis model is a fine-tuned version of google/vit-base-patch16-224-in21k on the properties dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1193\n- Ordinal Mae: 0.3505\n- Ordinal Accuracy: 0.7757\n- Na Accuracy: 0.9411",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.02\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# ryan04152024_ALLDATA\n\nThis model is a fine-tuned version of google/vit-base-patch16-224-in21k on the properties dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1193\n- Ordinal Mae: 0.3505\n- Ordinal Accuracy: 0.7757\n- Na Accuracy: 0.9411",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.02\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
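The generated card leaves this blank; per this row's tags the checkpoint is GPT-2 with a sequence-classification head, and the name suggests an untrained baseline, so the sketch below should be expected to produce near-random scores.

```python
# Hedged sketch: text classification with this GPT-2 checkpoint.
# Label names and quality are unknown; "untrained" suggests random outputs.
from transformers import pipeline

clf = pipeline("text-classification", model="mllm-dev/gpt2_untrained")
print(clf("This movie was surprisingly good."))
```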
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
mllm-dev/gpt2_untrained
| null |
[
"transformers",
"safetensors",
"gpt2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T18:20:49+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gpt2 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gpt2 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
token-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
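The generated card leaves this blank; per this row's tags the checkpoint is a RoBERTa token classifier, so the sketch below runs it as a tagging pipeline. The entity scheme and whether aggregation is appropriate are assumptions.

```python
# Hedged sketch: token classification (e.g. NER-style tagging) with this
# RoBERTa checkpoint; aggregation_strategy is an assumed convenience.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="adhi29/model_robertabase_1024_token_classification",
    aggregation_strategy="simple",
)
print(tagger("Hugging Face was founded in New York City."))
```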
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
adhi29/model_robertabase_1024_token_classification
| null |
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:24:04+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #roberta #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #roberta #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [microsoft/WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B)
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: Nexusflow/Starling-LM-7B-beta
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
- model: microsoft/WizardLM-2-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
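Given the configuration above, the merge is typically reproduced with the mergekit command line; a hedged sketch follows, assuming mergekit is installed (`pip install mergekit`) and the YAML is saved as config.yml. The output directory name is an assumption.

```python
# Hedged sketch: shell out to the mergekit CLI to rerun the merge above.
# Assumes the YAML is saved as config.yml in the working directory.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yml", "./merged-model"],
    check=True,
)
```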
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["cognitivecomputations/dolphin-2.8-mistral-7b-v02", "mistralai/Mistral-7B-v0.1", "microsoft/WizardLM-2-7B", "Nexusflow/Starling-LM-7B-beta"]}
|
Kukedlc/NeuralSoTa-7b-v0.1
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:microsoft/WizardLM-2-7B",
"base_model:Nexusflow/Starling-LM-7B-beta",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T18:24:14+00:00
|
[
"2403.19522"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #base_model-mistralai/Mistral-7B-v0.1 #base_model-microsoft/WizardLM-2-7B #base_model-Nexusflow/Starling-LM-7B-beta #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the Model Stock merge method using mistralai/Mistral-7B-v0.1 as a base.
### Models Merged
The following models were included in the merge:
* cognitivecomputations/dolphin-2.8-mistral-7b-v02
* microsoft/WizardLM-2-7B
* Nexusflow/Starling-LM-7B-beta
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using mistralai/Mistral-7B-v0.1 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02\n* microsoft/WizardLM-2-7B\n* Nexusflow/Starling-LM-7B-beta",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-cognitivecomputations/dolphin-2.8-mistral-7b-v02 #base_model-mistralai/Mistral-7B-v0.1 #base_model-microsoft/WizardLM-2-7B #base_model-Nexusflow/Starling-LM-7B-beta #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using mistralai/Mistral-7B-v0.1 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* cognitivecomputations/dolphin-2.8-mistral-7b-v02\n* microsoft/WizardLM-2-7B\n* Nexusflow/Starling-LM-7B-beta",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Resi/donut-docvqa-sagemaker
| null |
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:26:43+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_4096_512_27M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6843
- F1 Score: 0.7645
- Accuracy: 0.765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
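A hedged reconstruction of these settings as 🤗 `TrainingArguments` is sketched below; the original PEFT training script is not shown in this card, so the output directory and the exact argument mapping are assumptions.

```python
# Hedged reconstruction of the listed hyperparameters; an approximation of the
# training setup, not the exact code used for this model.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_tf_1-seqsight_4096_512_27M-L32_all",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=1536,
    per_device_eval_batch_size=1536,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```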
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6203 | 10.0 | 200 | 0.5971 | 0.6737 | 0.674 |
| 0.5436 | 20.0 | 400 | 0.5951 | 0.6948 | 0.695 |
| 0.5098 | 30.0 | 600 | 0.6054 | 0.6915 | 0.692 |
| 0.4807 | 40.0 | 800 | 0.6313 | 0.7070 | 0.707 |
| 0.4582 | 50.0 | 1000 | 0.6245 | 0.6960 | 0.696 |
| 0.439 | 60.0 | 1200 | 0.6293 | 0.7059 | 0.706 |
| 0.4227 | 70.0 | 1400 | 0.6354 | 0.6968 | 0.697 |
| 0.4098 | 80.0 | 1600 | 0.6395 | 0.6919 | 0.692 |
| 0.3956 | 90.0 | 1800 | 0.6355 | 0.6976 | 0.698 |
| 0.3835 | 100.0 | 2000 | 0.6611 | 0.6930 | 0.693 |
| 0.3727 | 110.0 | 2200 | 0.6497 | 0.7038 | 0.704 |
| 0.3593 | 120.0 | 2400 | 0.6813 | 0.6910 | 0.692 |
| 0.3486 | 130.0 | 2600 | 0.6977 | 0.6907 | 0.691 |
| 0.3372 | 140.0 | 2800 | 0.7030 | 0.6990 | 0.699 |
| 0.326 | 150.0 | 3000 | 0.7623 | 0.698 | 0.698 |
| 0.3147 | 160.0 | 3200 | 0.7431 | 0.6976 | 0.698 |
| 0.304 | 170.0 | 3400 | 0.7804 | 0.6987 | 0.699 |
| 0.2945 | 180.0 | 3600 | 0.7532 | 0.6918 | 0.692 |
| 0.2835 | 190.0 | 3800 | 0.7791 | 0.6919 | 0.692 |
| 0.2751 | 200.0 | 4000 | 0.8411 | 0.7017 | 0.702 |
| 0.2679 | 210.0 | 4200 | 0.7976 | 0.6990 | 0.699 |
| 0.2579 | 220.0 | 4400 | 0.8403 | 0.6889 | 0.689 |
| 0.2522 | 230.0 | 4600 | 0.8738 | 0.6950 | 0.695 |
| 0.245 | 240.0 | 4800 | 0.8509 | 0.6949 | 0.695 |
| 0.2366 | 250.0 | 5000 | 0.8555 | 0.6909 | 0.691 |
| 0.2321 | 260.0 | 5200 | 0.8754 | 0.7010 | 0.701 |
| 0.2241 | 270.0 | 5400 | 0.8817 | 0.6879 | 0.688 |
| 0.2205 | 280.0 | 5600 | 0.9012 | 0.6948 | 0.695 |
| 0.2131 | 290.0 | 5800 | 0.8698 | 0.6890 | 0.689 |
| 0.2087 | 300.0 | 6000 | 0.9154 | 0.6880 | 0.688 |
| 0.2032 | 310.0 | 6200 | 0.9253 | 0.6997 | 0.7 |
| 0.1993 | 320.0 | 6400 | 0.9378 | 0.7070 | 0.707 |
| 0.1963 | 330.0 | 6600 | 0.9213 | 0.6950 | 0.695 |
| 0.1906 | 340.0 | 6800 | 0.9641 | 0.6900 | 0.69 |
| 0.1877 | 350.0 | 7000 | 0.9687 | 0.6970 | 0.697 |
| 0.1845 | 360.0 | 7200 | 1.0025 | 0.6918 | 0.692 |
| 0.1816 | 370.0 | 7400 | 0.9776 | 0.6920 | 0.692 |
| 0.1785 | 380.0 | 7600 | 0.9934 | 0.6820 | 0.682 |
| 0.175 | 390.0 | 7800 | 0.9954 | 0.6880 | 0.688 |
| 0.174 | 400.0 | 8000 | 0.9954 | 0.698 | 0.698 |
| 0.1703 | 410.0 | 8200 | 0.9984 | 0.6960 | 0.696 |
| 0.1682 | 420.0 | 8400 | 1.0011 | 0.6930 | 0.693 |
| 0.1654 | 430.0 | 8600 | 1.0182 | 0.6860 | 0.686 |
| 0.1632 | 440.0 | 8800 | 1.0312 | 0.6898 | 0.69 |
| 0.1621 | 450.0 | 9000 | 1.0164 | 0.6900 | 0.69 |
| 0.1618 | 460.0 | 9200 | 1.0313 | 0.6880 | 0.688 |
| 0.1618 | 470.0 | 9400 | 1.0189 | 0.6879 | 0.688 |
| 0.1595 | 480.0 | 9600 | 1.0316 | 0.6870 | 0.687 |
| 0.1589 | 490.0 | 9800 | 1.0294 | 0.6870 | 0.687 |
| 0.1587 | 500.0 | 10000 | 1.0249 | 0.6899 | 0.69 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_tf_1-seqsight_4096_512_27M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_tf_1-seqsight_4096_512_27M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_27M",
"region:us"
] | null |
2024-04-15T18:27:19+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
|
GUE\_tf\_1-seqsight\_4096\_512\_27M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6843
* F1 Score: 0.7645
* Accuracy: 0.765
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
HenryCai1129/LlamaAdapter-llama2-happy-100-lora
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:28:56+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-classification
|
transformers
|
# Cross-Encoder for Sentence Similarity
This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
This model was trained on 6 different NLI datasets. The model will predict a score between 0 (not similar) and 1 (very similar) for the semantic similarity of two sentences.
## Usage (CrossEncoder)
Compare each sentence of the `sentences1` array to the corresponding sentence of the `sentences2` array: the first sentence of each array is compared, then the second sentence of each array, and so on.
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('abbasgolestani/ag-nli-DeTS-sentence-similarity-v3-light')
# Two lists of sentences
sentences1 = ['I am honored to be given the opportunity to help make our company better',
'I love my job and what I do here',
'I am excited about our company’s vision']
sentences2 = ['I am hopeful about the future of our company',
'My work is aligning with my passion',
'Definitely our company vision will be the next breakthrough to change the world and I’m so happy and proud to work here']
# Pair each sentence in sentences1 with its counterpart in sentences2
pairs = list(zip(sentences1, sentences2))
scores1 = model.predict(pairs, show_progress_bar=False)
print(scores1)
for i in range(len(sentences1)):
print("{} \t\t {} \t\t Score: {:.4f}".format(sentences1[i], sentences2[i], scores1[i]))
```
## Usage #2
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('abbasgolestani/ag-nli-DeTS-sentence-similarity-v3-light')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```
The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`.
You can also use this model without sentence_transformers, loading it directly through the Transformers ``AutoModel`` class.
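A minimal sketch of that plain-Transformers route follows; the card only names the ``AutoModel`` class, so the sequence-classification head with a single similarity logit assumed here should be verified against the repository config before relying on it.

```python
# Hedged sketch of scoring sentence pairs without sentence_transformers.
# Assumes the checkpoint carries a classification head with one logit per pair.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "abbasgolestani/ag-nli-DeTS-sentence-similarity-v3-light"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

features = tokenizer(
    ["Sentence 1", "Sentence 3"],
    ["Sentence 2", "Sentence 4"],
    padding=True,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    # One similarity score in [0, 1] per sentence pair
    scores = torch.sigmoid(model(**features).logits).squeeze(-1)
print(scores)
```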
|
{"language": ["en", "nl", "de", "fr", "it", "es"], "license": "apache-2.0", "tags": ["feature-extraction", "sentence-similarity", "transformers"], "datasets": ["multi_nli", "pietrolesci/nli_fever"], "pipeline_tag": "text-classification"}
|
abbasgolestani/ag-nli-DeTS-sentence-similarity-v3-light
| null |
[
"transformers",
"pytorch",
"electra",
"text-classification",
"feature-extraction",
"sentence-similarity",
"en",
"nl",
"de",
"fr",
"it",
"es",
"dataset:multi_nli",
"dataset:pietrolesci/nli_fever",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:29:08+00:00
|
[] |
[
"en",
"nl",
"de",
"fr",
"it",
"es"
] |
TAGS
#transformers #pytorch #electra #text-classification #feature-extraction #sentence-similarity #en #nl #de #fr #it #es #dataset-multi_nli #dataset-pietrolesci/nli_fever #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Cross-Encoder for Sentence Similarity
This model was trained using the SentenceTransformers Cross-Encoder class.
## Training Data
This model was trained on 6 different NLI datasets. The model will predict a score between 0 (not similar) and 1 (very similar) for the semantic similarity of two sentences.
## Usage (CrossEncoder)
Compare each sentence of the sentences1 array to the corresponding sentence of the sentences2 array: the first sentence of each array is compared, then the second sentence of each array, and so on.
## Usage #2
Pre-trained models can be used like this:
The model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'.
You can also use this model without sentence_transformers, loading it directly through the Transformers ''AutoModel'' class
|
[
"# Cross-Encoder for Sentence Similarity\nThis model was trained using SentenceTransformers Cross-Encoder class.",
"## Training Data\nThis model was trained on 6 different nli datasets. The model will predict a score between 0 (not similar) and 1 (very similar) for the semantic similarity of two sentences.",
"## Usage (CrossEncoder)\nComparing each sentence of sentences1 array to the corrosponding sentence of sentences2 array like comparing the first sentnece of each array, then comparing the second sentence of each array,...",
"## Usage #2\n\nPre-trained models can be used like this:\n\n\nThe model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'.\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class"
] |
[
"TAGS\n#transformers #pytorch #electra #text-classification #feature-extraction #sentence-similarity #en #nl #de #fr #it #es #dataset-multi_nli #dataset-pietrolesci/nli_fever #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Cross-Encoder for Sentence Similarity\nThis model was trained using SentenceTransformers Cross-Encoder class.",
"## Training Data\nThis model was trained on 6 different nli datasets. The model will predict a score between 0 (not similar) and 1 (very similar) for the semantic similarity of two sentences.",
"## Usage (CrossEncoder)\nComparing each sentence of sentences1 array to the corrosponding sentence of sentences2 array like comparing the first sentnece of each array, then comparing the second sentence of each array,...",
"## Usage #2\n\nPre-trained models can be used like this:\n\n\nThe model will predict scores for the pairs '('Sentence 1', 'Sentence 2')' and '('Sentence 3', 'Sentence 4')'.\n\nYou can use this model also without sentence_transformers and by just using Transformers ''AutoModel'' class"
] |
text-generation
|
transformers
|
For testing the Unsloth fine-tuning process and the Inference API.
**This model overfits the training data, so it cannot answer anything outside the Han dataset.**
## prompt
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
จงแต่งบทกวีเกี่ยวกับสายฝนที่ผ่านมา
### Response:
```
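For reference, a minimal generation sketch using this template is shown below (the Thai example instruction asks the model to compose a poem about the recent rain). The loading code and decoding settings are illustrative assumptions, with the temperature of 0.5 taken from the card's inference widget.

```python
# Minimal inference sketch (assumed usage; the card ships only the prompt template).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ping98k/gemma-han-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt template copied from the card
prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n"
    "จงแต่งบทกวีเกี่ยวกับสายฝนที่ผ่านมา\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```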
|
{"language": ["th"], "datasets": ["pythainlp/han-instruct-dataset-v2.0"], "base_model": "unsloth/gemma-2b", "pipeline_tag": "text-generation", "widget": [{"text": "\u0e08\u0e07\u0e41\u0e15\u0e48\u0e07\u0e1a\u0e17\u0e01\u0e27\u0e35\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e2a\u0e32\u0e22\u0e1d\u0e19\u0e17\u0e35\u0e48\u0e1c\u0e48\u0e32\u0e19\u0e21\u0e32", "example_title": "\u0e41\u0e15\u0e48\u0e07\u0e1a\u0e17\u0e01\u0e27\u0e35"}, {"text": "\u0e08\u0e07\u0e40\u0e02\u0e35\u0e22\u0e19\u0e42\u0e1b\u0e23\u0e41\u0e01\u0e23\u0e21\u0e20\u0e32\u0e29\u0e32 python \u0e41\u0e2a\u0e14\u0e07\u0e41\u0e21\u0e48\u0e2a\u0e39\u0e15\u0e23\u0e04\u0e39\u0e13\u0e41\u0e21\u0e48 12 \u0e43\u0e2b\u0e49\u0e2b\u0e19\u0e48\u0e2d\u0e22", "example_title": "\u0e40\u0e02\u0e35\u0e22\u0e19\u0e42\u0e1b\u0e23\u0e41\u0e01\u0e23\u0e21"}, {"text": "\u0e04\u0e34\u0e14\u0e40\u0e23\u0e37\u0e48\u0e2d\u0e07\u0e2a\u0e31\u0e49\u0e19\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e41\u0e21\u0e27\u0e43\u0e2b\u0e49\u0e2b\u0e19\u0e48\u0e2d\u0e22", "example_title": "\u0e04\u0e34\u0e14\u0e40\u0e23\u0e37\u0e48\u0e2d\u0e07\u0e2a\u0e31\u0e49\u0e19"}, {"text": "\u0e2a\u0e23\u0e49\u0e32\u0e07\u0e1b\u0e23\u0e30\u0e42\u0e22\u0e04\u0e42\u0e06\u0e29\u0e13\u0e32\u0e17\u0e35\u0e48\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e02\u0e49\u0e2d\u0e07\u0e01\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e19\u0e2d\u0e19\u0e43\u0e2b\u0e49\u0e2b\u0e19\u0e48\u0e2d\u0e22", "example_title": "\u0e2a\u0e23\u0e49\u0e32\u0e07\u0e42\u0e06\u0e29\u0e13\u0e32"}], "inference": {"parameters": {"temperature": 0.5}}}
|
ping98k/gemma-han-2b
| null |
[
"transformers",
"safetensors",
"gguf",
"gemma",
"text-generation",
"conversational",
"th",
"dataset:pythainlp/han-instruct-dataset-v2.0",
"base_model:unsloth/gemma-2b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T18:31:25+00:00
|
[] |
[
"th"
] |
TAGS
#transformers #safetensors #gguf #gemma #text-generation #conversational #th #dataset-pythainlp/han-instruct-dataset-v2.0 #base_model-unsloth/gemma-2b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
For testing the Unsloth fine-tuning process and the Inference API.
This model overfits the training data, so it cannot answer anything outside the Han dataset.
## prompt
|
[
"## prompt"
] |
[
"TAGS\n#transformers #safetensors #gguf #gemma #text-generation #conversational #th #dataset-pythainlp/han-instruct-dataset-v2.0 #base_model-unsloth/gemma-2b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## prompt"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Aviral2412/fineturning2
| null |
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:32:12+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size. Its model weights will be available in the coming days.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing leading open-source models that are 10x larger.
For more details on WizardLM-2, please read our [release blog post](https://wizardlm.github.io/WizardLM2) and the upcoming paper.
## Model Details
* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic MT-Bench evaluation framework, based on GPT-4 and proposed by LMSYS, to assess model performance.
WizardLM-2 8x22B demonstrates highly competitive performance even compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging evaluation set of real-world instructions that covers the main categories of human requests, such as writing, coding, math, reasoning, agent, and multilingual tasks.
We report the win:loss rate without ties:
- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models; please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details on this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub.
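As a quick illustration of the template above, the hedged helper below assembles a multi-turn Vicuna-style prompt string; the system string follows the card, but the helper itself is illustrative, not the official demo script.

```python
# Hedged helper for the multi-turn Vicuna-style format described above.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions."
)

def build_prompt(turns):
    """turns: list of (user_message, assistant_reply or None) tuples; leave the
    assistant reply of the final turn as None so the model completes it."""
    parts = [SYSTEM]
    for user, assistant in turns:
        parts.append(f" USER: {user} ASSISTANT:")
        if assistant is not None:
            parts.append(f" {assistant}</s>")
    return "".join(parts)

print(build_prompt([("Hi", "Hello."), ("Who are you?", None)]))
# -> "...questions. USER: Hi ASSISTANT: Hello.</s> USER: Who are you? ASSISTANT:"
```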
|
{"license": "apache-2.0"}
|
jncraton/WizardLM-2-7B-ct2-int8
| null |
[
"transformers",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:32:28+00:00
|
[
"2304.12244",
"2306.08568",
"2308.09583"
] |
[] |
TAGS
#transformers #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #endpoints_compatible #region-us
|
<p style="font-size:20px;" align="center">
<a href="URL target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
<a href="URL target="_blank">HF Repo</a> • <a href="URL target="_blank">Github Repo</a> • <a href="URL target="_blank">Twitter</a> • <a href="URL target="_blank">[WizardLM]</a> • <a href="URL target="_blank">[WizardCoder]</a> • <a href="URL target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
Join our <a href="URL target="_blank">Discord</a>
</p>
## News [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size. Its model weights will be available in the coming days.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing leading open-source models that are 10x larger.
For more details on WizardLM-2, please read our release blog post and the upcoming paper.
## Model Details
* Model name: WizardLM-2 7B
* Developed by: WizardLM@Microsoft AI
* Base model: mistralai/Mistral-7B-v0.1
* Parameters: 7B
* Language(s): Multilingual
* Blog: Introducing WizardLM-2
* Repository: URL
* Paper: WizardLM-2 (Upcoming)
* License: Apache2.0
## Model Capacities
MT-Bench
We also adopt the automatic MT-Bench evaluation framework, based on GPT-4 and proposed by LMSYS, to assess model performance.
WizardLM-2 8x22B demonstrates highly competitive performance even compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Human Preferences Evaluation
We carefully collected a complex and challenging evaluation set of real-world instructions that covers the main categories of human requests, such as writing, coding, math, reasoning, agent, and multilingual tasks.
We report the win:loss rate without ties:
- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a fully AI powered synthetic training system to train WizardLM-2 models; please refer to our blog for more details on this system.
<p align="center" width="100%">
<a ><img src="URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
<b>Note on system prompt usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as follows:
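The template itself was stripped from this card during extraction; as a reconstruction of the standard Vicuna-style format that WizardLM-2 follows (the system sentence and `</s>` turn separators are the usual Vicuna conventions):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives
helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```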
<b>Inference WizardLM-2 Demo Script</b>
We provide WizardLM-2 inference demo code on our GitHub.
|
[
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. This model weights will be available in the coming days. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] |
[
"TAGS\n#transformers #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #endpoints_compatible #region-us \n",
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. This model weights will be available in the coming days. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-mms-300m-ikk-2
This model is a fine-tuned version of [facebook/mms-300m](https://huggingface.co/facebook/mms-300m) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3401
- Wer: 0.6359
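A hedged inference sketch with the standard transformers ASR pipeline (the checkpoint name comes from this repository; given the ~0.64 WER on the evaluation set, transcriptions should be treated as rough drafts):

```python
from transformers import pipeline

# Standard automatic-speech-recognition pipeline; accepts a path or URL to an audio file.
asr = pipeline("automatic-speech-recognition", model="ogbi/wav2vec2-mms-300m-ikk-2")
print(asr("example.wav")["text"])  # "example.wav" is a placeholder input file
```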
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
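These settings map roughly onto `transformers.TrainingArguments` as sketched below; this is a readability aid under the assumption that the standard Trainer was used, not the actual training script, which the card does not include:

```python
from transformers import TrainingArguments

# Hedged sketch of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="wav2vec2-mms-300m-ikk-2",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    warmup_steps=500,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                      # "Native AMP" mixed precision
)
```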
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4215 | 16.33 | 400 | 1.3401 | 0.6359 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "datasets": ["audiofolder"], "metrics": ["wer"], "base_model": "facebook/mms-300m", "model-index": [{"name": "wav2vec2-mms-300m-ikk-2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "audiofolder", "type": "audiofolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "wer", "value": 0.6359151455352738, "name": "Wer"}]}]}]}
|
ogbi/wav2vec2-mms-300m-ikk-2
| null |
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:facebook/mms-300m",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:32:57+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-audiofolder #base_model-facebook/mms-300m #license-cc-by-nc-4.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-mms-300m-ikk-2
=======================
This model is a fine-tuned version of facebook/mms-300m on the audiofolder dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3401
* Wer: 0.6359
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-audiofolder #base_model-facebook/mms-300m #license-cc-by-nc-4.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_4096_512_27M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3329
- F1 Score: 0.6923
- Accuracy: 0.695
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5893 | 20.0 | 200 | 0.5803 | 0.6992 | 0.7 |
| 0.4605 | 40.0 | 400 | 0.5852 | 0.7173 | 0.719 |
| 0.3874 | 60.0 | 600 | 0.5504 | 0.7645 | 0.766 |
| 0.3355 | 80.0 | 800 | 0.5582 | 0.7738 | 0.774 |
| 0.2993 | 100.0 | 1000 | 0.5894 | 0.7939 | 0.795 |
| 0.2698 | 120.0 | 1200 | 0.6359 | 0.7884 | 0.79 |
| 0.2448 | 140.0 | 1400 | 0.6170 | 0.7959 | 0.797 |
| 0.2261 | 160.0 | 1600 | 0.6599 | 0.7980 | 0.799 |
| 0.2073 | 180.0 | 1800 | 0.6513 | 0.8051 | 0.806 |
| 0.193 | 200.0 | 2000 | 0.7146 | 0.7964 | 0.798 |
| 0.1806 | 220.0 | 2200 | 0.6907 | 0.8081 | 0.809 |
| 0.1682 | 240.0 | 2400 | 0.7065 | 0.8093 | 0.81 |
| 0.1561 | 260.0 | 2600 | 0.6980 | 0.8123 | 0.813 |
| 0.1474 | 280.0 | 2800 | 0.6753 | 0.8072 | 0.808 |
| 0.1396 | 300.0 | 3000 | 0.7101 | 0.8124 | 0.813 |
| 0.1316 | 320.0 | 3200 | 0.7639 | 0.8015 | 0.803 |
| 0.1242 | 340.0 | 3400 | 0.7226 | 0.8153 | 0.816 |
| 0.1163 | 360.0 | 3600 | 0.7783 | 0.8048 | 0.806 |
| 0.113 | 380.0 | 3800 | 0.7560 | 0.8008 | 0.802 |
| 0.108 | 400.0 | 4000 | 0.7853 | 0.8133 | 0.814 |
| 0.1008 | 420.0 | 4200 | 0.7791 | 0.8071 | 0.808 |
| 0.098 | 440.0 | 4400 | 0.8229 | 0.8101 | 0.811 |
| 0.0933 | 460.0 | 4600 | 0.7589 | 0.8152 | 0.816 |
| 0.0901 | 480.0 | 4800 | 0.7803 | 0.8048 | 0.806 |
| 0.0849 | 500.0 | 5000 | 0.7706 | 0.8111 | 0.812 |
| 0.0818 | 520.0 | 5200 | 0.7711 | 0.8165 | 0.817 |
| 0.0789 | 540.0 | 5400 | 0.7938 | 0.8122 | 0.813 |
| 0.0761 | 560.0 | 5600 | 0.7861 | 0.8235 | 0.824 |
| 0.0731 | 580.0 | 5800 | 0.8139 | 0.8119 | 0.813 |
| 0.07 | 600.0 | 6000 | 0.8033 | 0.8194 | 0.82 |
| 0.0674 | 620.0 | 6200 | 0.8988 | 0.8023 | 0.804 |
| 0.0663 | 640.0 | 6400 | 0.8774 | 0.8141 | 0.815 |
| 0.0643 | 660.0 | 6600 | 0.8402 | 0.8122 | 0.813 |
| 0.0611 | 680.0 | 6800 | 0.8827 | 0.8153 | 0.816 |
| 0.0605 | 700.0 | 7000 | 0.8664 | 0.8058 | 0.807 |
| 0.059 | 720.0 | 7200 | 0.8272 | 0.8165 | 0.817 |
| 0.0573 | 740.0 | 7400 | 0.8359 | 0.8154 | 0.816 |
| 0.0557 | 760.0 | 7600 | 0.8723 | 0.8161 | 0.817 |
| 0.0554 | 780.0 | 7800 | 0.8625 | 0.8143 | 0.815 |
| 0.0535 | 800.0 | 8000 | 0.8531 | 0.8186 | 0.819 |
| 0.0536 | 820.0 | 8200 | 0.8659 | 0.8135 | 0.814 |
| 0.0519 | 840.0 | 8400 | 0.8749 | 0.8145 | 0.815 |
| 0.0506 | 860.0 | 8600 | 0.8891 | 0.8182 | 0.819 |
| 0.0504 | 880.0 | 8800 | 0.8822 | 0.8163 | 0.817 |
| 0.049 | 900.0 | 9000 | 0.9136 | 0.8172 | 0.818 |
| 0.0496 | 920.0 | 9200 | 0.8937 | 0.8193 | 0.82 |
| 0.0481 | 940.0 | 9400 | 0.8762 | 0.8165 | 0.817 |
| 0.0479 | 960.0 | 9600 | 0.8961 | 0.8113 | 0.812 |
| 0.0467 | 980.0 | 9800 | 0.9135 | 0.8154 | 0.816 |
| 0.0468 | 1000.0 | 10000 | 0.8967 | 0.8133 | 0.814 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
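Because this repository stores a PEFT adapter rather than full model weights, inference means attaching the adapter to its base checkpoint. A hedged sketch (the sequence-classification head and `num_labels=2` are assumptions inferred from the F1/accuracy metrics, not stated in the card):

```python
from transformers import AutoModelForSequenceClassification
from peft import PeftModel

# Load the seqsight base model, then attach the fine-tuned adapter from this repo.
base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_27M", num_labels=2
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_27M-L32_all"
)
model.eval()
```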
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_tf_4-seqsight_4096_512_27M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_tf_4-seqsight_4096_512_27M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_27M",
"region:us"
] | null |
2024-04-15T18:35:20+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
|
GUE\_tf\_4-seqsight\_4096\_512\_27M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3329
* F1 Score: 0.6923
* Accuracy: 0.695
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_4096_512_27M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6934
- F1 Score: 0.6100
- Accuracy: 0.611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6622 | 14.29 | 200 | 0.6280 | 0.6255 | 0.636 |
| 0.5954 | 28.57 | 400 | 0.6567 | 0.6140 | 0.62 |
| 0.5554 | 42.86 | 600 | 0.6839 | 0.6331 | 0.633 |
| 0.5229 | 57.14 | 800 | 0.6936 | 0.6441 | 0.644 |
| 0.497 | 71.43 | 1000 | 0.7280 | 0.6321 | 0.632 |
| 0.4776 | 85.71 | 1200 | 0.7605 | 0.6351 | 0.635 |
| 0.4631 | 100.0 | 1400 | 0.7473 | 0.6165 | 0.617 |
| 0.4463 | 114.29 | 1600 | 0.7566 | 0.6311 | 0.631 |
| 0.4297 | 128.57 | 1800 | 0.7849 | 0.6354 | 0.637 |
| 0.4164 | 142.86 | 2000 | 0.8287 | 0.6189 | 0.619 |
| 0.4007 | 157.14 | 2200 | 0.7808 | 0.6101 | 0.61 |
| 0.3873 | 171.43 | 2400 | 0.8039 | 0.6131 | 0.613 |
| 0.3704 | 185.71 | 2600 | 0.8592 | 0.6131 | 0.613 |
| 0.3586 | 200.0 | 2800 | 0.8205 | 0.6198 | 0.62 |
| 0.3443 | 214.29 | 3000 | 0.8332 | 0.6136 | 0.614 |
| 0.3322 | 228.57 | 3200 | 0.8730 | 0.6134 | 0.614 |
| 0.3216 | 242.86 | 3400 | 0.8971 | 0.6158 | 0.616 |
| 0.3098 | 257.14 | 3600 | 0.9128 | 0.6060 | 0.606 |
| 0.3 | 271.43 | 3800 | 0.9253 | 0.614 | 0.614 |
| 0.2864 | 285.71 | 4000 | 0.9561 | 0.6070 | 0.607 |
| 0.2793 | 300.0 | 4200 | 0.9541 | 0.6101 | 0.61 |
| 0.2688 | 314.29 | 4400 | 0.9699 | 0.6091 | 0.61 |
| 0.2613 | 328.57 | 4600 | 0.9740 | 0.6230 | 0.623 |
| 0.2525 | 342.86 | 4800 | 0.9873 | 0.6240 | 0.624 |
| 0.2446 | 357.14 | 5000 | 0.9957 | 0.6219 | 0.622 |
| 0.2376 | 371.43 | 5200 | 1.0356 | 0.6079 | 0.608 |
| 0.2304 | 385.71 | 5400 | 1.0537 | 0.6196 | 0.62 |
| 0.2246 | 400.0 | 5600 | 1.0538 | 0.6021 | 0.603 |
| 0.2175 | 414.29 | 5800 | 1.0885 | 0.6206 | 0.621 |
| 0.2115 | 428.57 | 6000 | 1.0782 | 0.6201 | 0.62 |
| 0.207 | 442.86 | 6200 | 1.0633 | 0.6159 | 0.616 |
| 0.2007 | 457.14 | 6400 | 1.0680 | 0.6168 | 0.617 |
| 0.1968 | 471.43 | 6600 | 1.0833 | 0.6209 | 0.621 |
| 0.1924 | 485.71 | 6800 | 1.1191 | 0.6096 | 0.611 |
| 0.1888 | 500.0 | 7000 | 1.1181 | 0.6157 | 0.616 |
| 0.186 | 514.29 | 7200 | 1.0924 | 0.6146 | 0.615 |
| 0.182 | 528.57 | 7400 | 1.1181 | 0.6153 | 0.616 |
| 0.1797 | 542.86 | 7600 | 1.1098 | 0.6208 | 0.621 |
| 0.1767 | 557.14 | 7800 | 1.1304 | 0.6117 | 0.613 |
| 0.1747 | 571.43 | 8000 | 1.1314 | 0.6186 | 0.619 |
| 0.1708 | 585.71 | 8200 | 1.1295 | 0.6247 | 0.625 |
| 0.1687 | 600.0 | 8400 | 1.1234 | 0.6291 | 0.629 |
| 0.1666 | 614.29 | 8600 | 1.1443 | 0.6224 | 0.623 |
| 0.1638 | 628.57 | 8800 | 1.1538 | 0.6207 | 0.621 |
| 0.1624 | 642.86 | 9000 | 1.1545 | 0.6270 | 0.627 |
| 0.1623 | 657.14 | 9200 | 1.1364 | 0.6209 | 0.621 |
| 0.1604 | 671.43 | 9400 | 1.1558 | 0.6309 | 0.631 |
| 0.16 | 685.71 | 9600 | 1.1534 | 0.6268 | 0.627 |
| 0.1588 | 700.0 | 9800 | 1.1571 | 0.6276 | 0.628 |
| 0.1576 | 714.29 | 10000 | 1.1574 | 0.6256 | 0.626 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_tf_3-seqsight_4096_512_27M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_27M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_27M",
"region:us"
] | null |
2024-04-15T18:39:33+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
|
GUE\_tf\_3-seqsight\_4096\_512\_27M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6934
* F1 Score: 0.6100
* Accuracy: 0.611
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation
|
transformers
|
Bio-REBEL
|
{"license": "apache-2.0"}
|
IvyW/rebel_for_bio
| null |
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:42:25+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #bart #text2text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Bio-REBEL
|
[] |
[
"TAGS\n#transformers #safetensors #bart #text2text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-to-image
|
diffusers
|
# armor-samurai
<Gallery />
## Model description
Creates renders of Samurai armor by adhicipta
## Trigger words
You should use `samurai` to trigger the image generation.
You should use `armor` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/MarkBW/armor-samurai/tree/main) them in the Files & versions tab.
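A hedged usage sketch with the standard diffusers LoRA API (the base model comes from this card's metadata; the prompt is an illustrative example that includes both trigger words):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach this LoRA; diffusers resolves the
# safetensors adapter file inside the repo automatically.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("MarkBW/armor-samurai")

image = pipe("a warrior wearing samurai armor, full moon behind a castle").images[0]
image.save("samurai_armor.png")
```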
|
{"tags": ["text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "widget": [{"text": "UNICODE\u0000\u00001\u0000w\u0000o\u0000m\u0000a\u0000n\u0000,\u0000 \u0000p\u0000o\u0000r\u0000t\u0000r\u0000a\u0000i\u0000t\u0000,\u0000 \u0000 \u0000a\u0000 \u0000b\u0000e\u0000a\u0000u\u0000t\u0000i\u0000f\u0000u\u0000l\u0000 \u0000g\u0000i\u0000r\u0000l\u0000 \u0000w\u0000e\u0000a\u0000r\u0000i\u0000n\u0000g\u0000 \u0000s\u0000a\u0000m\u0000u\u0000r\u0000a\u0000i\u0000 \u0000h\u0000e\u0000l\u0000m\u0000e\u0000t\u0000,\u0000 \u0000 \u0000s\u0000h\u0000o\u0000r\u0000t\u0000 \u0000h\u0000a\u0000i\u0000r\u0000,\u0000 \u0000l\u0000o\u0000o\u0000k\u0000i\u0000n\u0000g\u0000 \u0000a\u0000t\u0000 \u0000v\u0000i\u0000e\u0000w\u0000e\u0000r\u0000,\u0000 \u0000j\u0000a\u0000p\u0000a\u0000n\u0000 \u0000c\u0000a\u0000s\u0000t\u0000l\u0000e\u0000 \u0000i\u0000n\u0000 \u0000f\u0000r\u0000o\u0000n\u0000t\u0000 \u0000o\u0000f\u0000 \u0000f\u0000u\u0000l\u0000l\u0000 \u0000m\u0000o\u0000o\u0000n\u0000 \u0000c\u0000e\u0000n\u0000t\u0000e\u0000r\u0000 \u0000i\u0000n\u0000 \u0000f\u0000r\u0000a\u0000m\u0000e\u0000,\u0000 \u0000<\u0000l\u0000o\u0000r\u0000a\u0000:\u0000s\u0000a\u0000m\u0000u\u0000r\u0000a\u0000i\u0000L\u0000o\u0000r\u0000a\u0000V\u00000\u00001\u0000:\u00000\u0000.\u00008\u0000>\u0000,\u0000 \u0000P\u0000h\u0000o\u0000t\u0000o\u0000r\u0000e\u0000a\u0000l\u0000i\u0000s\u0000t\u0000i\u0000c\u0000,\u0000 \u0000H\u0000y\u0000p\u0000e\u0000r\u0000r\u0000e\u0000a\u0000l\u0000i\u0000s\u0000t\u0000i\u0000c\u0000,\u0000 \u0000H\u0000y\u0000p\u0000e\u0000r\u0000d\u0000e\u0000t\u0000a\u0000i\u0000l\u0000e\u0000d\u0000,\u0000 \u0000a\u0000n\u0000a\u0000l\u0000o\u0000g\u0000 \u0000s\u0000t\u0000y\u0000l\u0000e\u0000,\u0000 \u0000h\u0000i\u0000p\u0000 \u0000c\u0000o\u0000c\u0000k\u0000e\u0000d\u0000,\u0000 \u0000d\u0000e\u0000m\u0000u\u0000r\u0000e\u0000,\u0000 \u0000l\u0000o\u0000w\u0000 \u0000c\u0000u\u0000t\u0000,\u0000 \u0000d\u0000e\u0000t\u0000a\u0000i\u0000l\u0000e\u0000d\u0000 \u0000s\u0000k\u0000i\u0000n\u0000,\u0000 \u0000m\u0000a\u0000t\u0000t\u0000e\u0000 \u0000s\u0000k\u0000i\u0000n\u0000,\u0000 \u0000s\u0000o\u0000f\u0000t\u0000 \u0000l\u0000i\u0000g\u0000h\u0000t\u0000i\u0000n\u0000g\u0000,\u0000 \u0000s\u0000u\u0000b\u0000s\u0000u\u0000r\u0000f\u0000a\u0000c\u0000e\u0000 \u0000s\u0000c\u0000a\u0000t\u0000t\u0000e\u0000r\u0000i\u0000n\u0000g\u0000,\u0000 \u0000r\u0000e\u0000a\u0000l\u0000i\u0000s\u0000t\u0000i\u0000c\u0000,\u0000 \u0000h\u0000e\u0000a\u0000v\u0000y\u0000 \u0000s\u0000h\u0000a\u0000d\u0000o\u0000w\u0000,\u0000 \u0000m\u0000a\u0000s\u0000t\u0000e\u0000r\u0000p\u0000i\u0000e\u0000c\u0000e\u0000,\u0000 \u0000b\u0000e\u0000s\u0000t\u0000 \u0000q\u0000u\u0000a\u0000l\u0000i\u0000t\u0000y\u0000,\u0000 \u0000u\u0000l\u0000t\u0000r\u0000a\u0000 \u0000r\u0000e\u0000a\u0000l\u0000i\u0000s\u0000t\u0000i\u0000c\u0000,\u0000 \u00008\u0000k\u0000,\u0000 \u0000g\u0000o\u0000l\u0000d\u0000e\u0000n\u0000 \u0000r\u0000a\u0000t\u0000i\u0000o\u0000,\u0000 \u0000I\u0000n\u0000t\u0000r\u0000i\u0000c\u0000a\u0000t\u0000e\u0000,\u0000 \u0000H\u0000i\u0000g\u0000h\u0000 \u0000D\u0000e\u0000t\u0000a\u0000i\u0000l\u0000,\u0000 \u0000f\u0000i\u0000l\u0000m\u0000 \u0000p\u0000h\u0000o\u0000t\u0000o\u0000g\u0000r\u0000a\u0000p\u0000h\u0000y\u0000,\u0000 \u0000s\u0000o\u0000f\u0000t\u0000 \u0000f\u0000o\u0000c\u0000u\u0000s\u0000,\u0000 \u0000 \u0000b\u0000l\u0000u\u0000r\u0000r\u0000y\u0000 \u0000b\u0000a\u0000c\u0000k\u0000g\u0000r\u0000o\u0000u\u0000n\u0000d\u0000,\u0000", "output": {"url": 
"images/tmpgq75avu5.jpeg"}}], "base_model": "runwayml/stable-diffusion-v1-5", "instance_prompt": "samurai, armor"}
|
MarkBW/armor-samurai
| null |
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"region:us"
] | null |
2024-04-15T18:42:38+00:00
|
[] |
[] |
TAGS
#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-runwayml/stable-diffusion-v1-5 #region-us
|
# armor-samurai
<Gallery />
## Model description
Creates renders of Samurai armor by adhicipta
## Trigger words
You should use 'samurai' to trigger the image generation.
You should use 'armor' to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
|
[
"# armor-samurai\n\n<Gallery />",
"## Model description \n\nCreates renders of Samurai armor by adhicipta",
"## Trigger words\n\nYou should use 'samurai' to trigger the image generation.\n\nYou should use 'armor' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
[
"TAGS\n#diffusers #text-to-image #stable-diffusion #lora #template-sd-lora #base_model-runwayml/stable-diffusion-v1-5 #region-us \n",
"# armor-samurai\n\n<Gallery />",
"## Model description \n\nCreates renders of Samurai armor by adhicipta",
"## Trigger words\n\nYou should use 'samurai' to trigger the image generation.\n\nYou should use 'armor' to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab."
] |
image-classification
|
transformers
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.4315283000469208
f1_macro: 0.6149830093941424
f1_micro: 0.8602430555555556
f1_weighted: 0.8515059109185544
precision_macro: 0.7610988679415244
precision_micro: 0.8602430555555556
precision_weighted: 0.8532444856848228
recall_macro: 0.5527145295483504
recall_micro: 0.8602430555555556
recall_weighted: 0.8602430555555556
accuracy: 0.8602430555555556
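The gap between macro F1 (≈0.61) and micro F1 (≈0.86) suggests the classes are imbalanced, so per-class performance varies. A hedged inference sketch with the standard image-classification pipeline (the label set of this AutoTrain model is not documented here):

```python
from transformers import pipeline

# The pipeline accepts a local path, PIL image, or URL.
classifier = pipeline("image-classification", model="howdyaendra/xblock-large-patch2-224")
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```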
|
{"tags": ["autotrain", "image-classification"], "datasets": ["xblock-large-patch2-224/autotrain-data"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
|
howdyaendra/xblock-large-patch2-224
| null |
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:xblock-large-patch2-224/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:44:05+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #vit #image-classification #autotrain #dataset-xblock-large-patch2-224/autotrain-data #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.4315283000469208
f1_macro: 0.6149830093941424
f1_micro: 0.8602430555555556
f1_weighted: 0.8515059109185544
precision_macro: 0.7610988679415244
precision_micro: 0.8602430555555556
precision_weighted: 0.8532444856848228
recall_macro: 0.5527145295483504
recall_micro: 0.8602430555555556
recall_weighted: 0.8602430555555556
accuracy: 0.8602430555555556
|
[
"# Model Trained Using AutoTrain\n\n- Problem type: Image Classification",
"## Validation Metrics\nloss: 0.4315283000469208\n\nf1_macro: 0.6149830093941424\n\nf1_micro: 0.8602430555555556\n\nf1_weighted: 0.8515059109185544\n\nprecision_macro: 0.7610988679415244\n\nprecision_micro: 0.8602430555555556\n\nprecision_weighted: 0.8532444856848228\n\nrecall_macro: 0.5527145295483504\n\nrecall_micro: 0.8602430555555556\n\nrecall_weighted: 0.8602430555555556\n\naccuracy: 0.8602430555555556"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #autotrain #dataset-xblock-large-patch2-224/autotrain-data #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoTrain\n\n- Problem type: Image Classification",
"## Validation Metrics\nloss: 0.4315283000469208\n\nf1_macro: 0.6149830093941424\n\nf1_micro: 0.8602430555555556\n\nf1_weighted: 0.8515059109185544\n\nprecision_macro: 0.7610988679415244\n\nprecision_micro: 0.8602430555555556\n\nprecision_weighted: 0.8532444856848228\n\nrecall_macro: 0.5527145295483504\n\nrecall_micro: 0.8602430555555556\n\nrecall_weighted: 0.8602430555555556\n\naccuracy: 0.8602430555555556"
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # environment API used below

# load_from_hub is the helper from the Hugging Face Deep RL course; it
# downloads the pickled Q-table from the Hub and unpickles it.
model = load_from_hub(repo_id="djlouie/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
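A short rollout sketch to act greedily with the loaded table (assuming the pickled dict stores the table under a `"qtable"` key, as in the Deep RL course, and a gym version whose `reset`/`step` follow the 5-tuple API):

```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```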
|
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
|
djlouie/q-FrozenLake-v1-4x4-noSlippery
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-15T18:46:10+00:00
|
[] |
[] |
TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
|
[
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
[
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
HenryCai1129/LlamaAdapter-llama2-happy-300-new
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:47:50+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_2-seqsight_4096_512_27M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_27M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_27M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6898
- F1 Score: 0.7169
- Accuracy: 0.717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
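As a quick sanity check on the schedule (see the results table below): the epoch counter advances by 20 for every 200 optimizer steps, i.e. about 10 steps per epoch, so one epoch covers roughly 10 × 2048 ≈ 20k training examples.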
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6109 | 20.0 | 200 | 0.6314 | 0.6449 | 0.648 |
| 0.5226 | 40.0 | 400 | 0.6683 | 0.6520 | 0.653 |
| 0.4697 | 60.0 | 600 | 0.7173 | 0.6419 | 0.643 |
| 0.4277 | 80.0 | 800 | 0.7199 | 0.6776 | 0.678 |
| 0.3909 | 100.0 | 1000 | 0.7425 | 0.6832 | 0.684 |
| 0.3611 | 120.0 | 1200 | 0.7882 | 0.6900 | 0.69 |
| 0.3394 | 140.0 | 1400 | 0.8444 | 0.6860 | 0.686 |
| 0.3189 | 160.0 | 1600 | 0.8444 | 0.6764 | 0.677 |
| 0.299 | 180.0 | 1800 | 0.8485 | 0.6707 | 0.671 |
| 0.2793 | 200.0 | 2000 | 0.8956 | 0.6790 | 0.679 |
| 0.2616 | 220.0 | 2200 | 0.9033 | 0.6661 | 0.667 |
| 0.2497 | 240.0 | 2400 | 0.9799 | 0.6690 | 0.669 |
| 0.2334 | 260.0 | 2600 | 0.9636 | 0.6703 | 0.671 |
| 0.2187 | 280.0 | 2800 | 0.9732 | 0.6558 | 0.656 |
| 0.2081 | 300.0 | 3000 | 1.0578 | 0.6537 | 0.654 |
| 0.194 | 320.0 | 3200 | 1.0419 | 0.6690 | 0.669 |
| 0.1838 | 340.0 | 3400 | 1.0990 | 0.6577 | 0.658 |
| 0.1765 | 360.0 | 3600 | 1.0775 | 0.6660 | 0.666 |
| 0.1677 | 380.0 | 3800 | 1.1480 | 0.6639 | 0.664 |
| 0.1606 | 400.0 | 4000 | 1.1611 | 0.6577 | 0.658 |
| 0.1526 | 420.0 | 4200 | 1.1803 | 0.6628 | 0.663 |
| 0.1457 | 440.0 | 4400 | 1.1645 | 0.6668 | 0.667 |
| 0.1404 | 460.0 | 4600 | 1.2014 | 0.6592 | 0.66 |
| 0.1354 | 480.0 | 4800 | 1.2448 | 0.6680 | 0.668 |
| 0.1293 | 500.0 | 5000 | 1.2596 | 0.6667 | 0.667 |
| 0.125 | 520.0 | 5200 | 1.2830 | 0.664 | 0.664 |
| 0.1208 | 540.0 | 5400 | 1.2681 | 0.6709 | 0.671 |
| 0.1163 | 560.0 | 5600 | 1.2902 | 0.672 | 0.672 |
| 0.1128 | 580.0 | 5800 | 1.3111 | 0.6730 | 0.673 |
| 0.1082 | 600.0 | 6000 | 1.3644 | 0.6750 | 0.675 |
| 0.1054 | 620.0 | 6200 | 1.3422 | 0.6699 | 0.67 |
| 0.1015 | 640.0 | 6400 | 1.3672 | 0.6776 | 0.678 |
| 0.0993 | 660.0 | 6600 | 1.3445 | 0.674 | 0.674 |
| 0.0967 | 680.0 | 6800 | 1.3817 | 0.6726 | 0.673 |
| 0.0952 | 700.0 | 7000 | 1.4071 | 0.6790 | 0.679 |
| 0.093 | 720.0 | 7200 | 1.3856 | 0.6749 | 0.675 |
| 0.09 | 740.0 | 7400 | 1.4259 | 0.678 | 0.678 |
| 0.0868 | 760.0 | 7600 | 1.3913 | 0.6770 | 0.677 |
| 0.0861 | 780.0 | 7800 | 1.4341 | 0.6750 | 0.675 |
| 0.0846 | 800.0 | 8000 | 1.4084 | 0.6750 | 0.675 |
| 0.0828 | 820.0 | 8200 | 1.4084 | 0.6770 | 0.677 |
| 0.0819 | 840.0 | 8400 | 1.4444 | 0.6800 | 0.68 |
| 0.0797 | 860.0 | 8600 | 1.4824 | 0.6758 | 0.676 |
| 0.0794 | 880.0 | 8800 | 1.4446 | 0.6740 | 0.674 |
| 0.0774 | 900.0 | 9000 | 1.4435 | 0.6730 | 0.673 |
| 0.0762 | 920.0 | 9200 | 1.4719 | 0.6760 | 0.676 |
| 0.077 | 940.0 | 9400 | 1.4484 | 0.6799 | 0.68 |
| 0.0759 | 960.0 | 9600 | 1.4584 | 0.6820 | 0.682 |
| 0.0749 | 980.0 | 9800 | 1.4669 | 0.6840 | 0.684 |
| 0.076 | 1000.0 | 10000 | 1.4527 | 0.6810 | 0.681 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_27M", "model-index": [{"name": "GUE_tf_2-seqsight_4096_512_27M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_tf_2-seqsight_4096_512_27M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_27M",
"region:us"
] | null |
2024-04-15T18:48:57+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us
|
GUE\_tf\_2-seqsight\_4096\_512\_27M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_27M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6898
* F1 Score: 0.7169
* Accuracy: 0.717
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_27M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
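As a hedged placeholder while the card is incomplete: this repository is a PEFT adapter whose metadata names `meta-llama/Llama-2-7b-chat-hf` as the base model, so loading it would look roughly like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Attach the adapter in this repo to its base model (gated; requires access).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base, "ASaska/Llama-2-7b-chat-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```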
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
{"library_name": "peft", "base_model": "meta-llama/Llama-2-7b-chat-hf"}
|
ASaska/Llama-2-7b-chat-hf
| null |
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null |
2024-04-15T18:49:18+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
[
"TAGS\n#peft #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
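The card leaves this section empty; a minimal sketch for a standard 🤗 transformers causal LM (untested against this specific checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thusinh1969/LLaMA-2-finetune-50k-ep1.42-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("Hello, ", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```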
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
thusinh1969/LLaMA-2-finetune-50k-ep1.42-DPO
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T18:51:44+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the pickle-based helper from the Deep RL course (sketched below).
model = load_from_hub(repo_id="djlouie/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
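`load_from_hub` is not part of a published package; the sketch below matches the pickle-based helper used in the Hugging Face Deep RL course notebooks (an assumption, since the card does not define it):

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled dict (Q-table, env_id, hyperparameters, ...) and deserialize it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```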
|
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
|
djlouie/q-Taxi-v3
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-15T18:51:44+00:00
|
[] |
[] |
TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3.
## Usage
|
[
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
[
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
text-generation
|
transformers
|
# ECE-TW3-JRGL-VHF1
ECE-TW3-JRGL-VHF1 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [MTSAIR/MultiVerse_70B](https://huggingface.co/MTSAIR/MultiVerse_70B)
* [davidkim205/Rhea-72b-v0.5](https://huggingface.co/davidkim205/Rhea-72b-v0.5)
## 🧩 Configuration
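The configuration block is empty in the source card. A plausible SLERP-style sketch in the usual lazymergekit format is shown below — the method, base model, and `t` value are illustrative assumptions, not the actual recipe:

```yaml
# Illustrative only: the real merge settings were not published with this card.
models:
  - model: MTSAIR/MultiVerse_70B
  - model: davidkim205/Rhea-72b-v0.5
merge_method: slerp
base_model: MTSAIR/MultiVerse_70B
parameters:
  t: 0.5
dtype: bfloat16
```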
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "MTSAIR/MultiVerse_70B", "davidkim205/Rhea-72b-v0.5"]}
|
IAFrance/ECE-TW3-JRGL-VHF1
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"MTSAIR/MultiVerse_70B",
"davidkim205/Rhea-72b-v0.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T18:54:05+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #MTSAIR/MultiVerse_70B #davidkim205/Rhea-72b-v0.5 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ECE-TW3-JRGL-VHF1
ECE-TW3-JRGL-VHF1 is a merge of the following models using mergekit:
* MTSAIR/MultiVerse_70B
* davidkim205/Rhea-72b-v0.5
## Configuration
|
[
"# ECE-TW3-JRGL-VHF1\n\nECE-TW3-JRGL-VHF1 is a merge of the following models using mergekit:\n* MTSAIR/MultiVerse_70B\n* davidkim205/Rhea-72b-v0.5",
"## Configuration"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #MTSAIR/MultiVerse_70B #davidkim205/Rhea-72b-v0.5 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ECE-TW3-JRGL-VHF1\n\nECE-TW3-JRGL-VHF1 is a merge of the following models using mergekit:\n* MTSAIR/MultiVerse_70B\n* davidkim205/Rhea-72b-v0.5",
"## Configuration"
] |
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the usual `huggingface_sb3` naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then restore the trained PPO policy.
checkpoint = load_from_hub("eulpicard/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "270.22 +/- 17.02", "name": "mean_reward", "verified": false}]}]}]}
|
eulpicard/ppo-LunarLander-v2
| null |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-15T18:57:41+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
|
[
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
[
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
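The card leaves this empty; a minimal sketch assuming the repo is a standard 🤗 transformers StableLM chat checkpoint whose tokenizer ships a chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tomaszki/stablelm-35"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
# apply_chat_template formats the conversation the way the model expects.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=50)[0], skip_special_tokens=True))
```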
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
tomaszki/stablelm-35
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T18:58:32+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# [MaziyarPanahi/WizardLM-2-8x22B-GGUF](https://huggingface.co/MaziyarPanahi/WizardLM-2-8x22B-GGUF)
- Model creator: [microsoft](https://huggingface.co/microsoft)
- Original model: [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B)
## Description
[MaziyarPanahi/WizardLM-2-8x22B-GGUF](https://huggingface.co/MaziyarPanahi/WizardLM-2-8x22B-GGUF) contains GGUF format model files for [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B).
## How to download
You can download only the quants you need instead of cloning the entire repository as follows:
```sh
huggingface-cli download MaziyarPanahi/WizardLM-2-8x22B-GGUF --local-dir . --include '*Q2_K*gguf'
```
On Windows:
```sh
huggingface-cli download MaziyarPanahi/WizardLM-2-8x22B-GGUF --local-dir . --include *Q4_K_S*gguf
```
## Load sharded model
`llama_load_model_from_file` will detect the number of files and will load additional tensors from the rest of files.
```sh
llama.cpp/main -m WizardLM-2-8x22B.Q2_K-00001-of-00005.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 1024 -e
```
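For Python users, the same sharded GGUF can be loaded through `llama-cpp-python` — a sketch assuming you downloaded the Q2_K shard set above; point at the first shard and the remaining files are discovered automatically:

```python
from llama_cpp import Llama

llm = Llama(model_path="WizardLM-2-8x22B.Q2_K-00001-of-00005.gguf", n_ctx=4096)
out = llm("Building a website can be done in 10 simple steps:\nStep 1:", max_tokens=256)
print(out["choices"][0]["text"])
```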
## Prompt template
```
{system_prompt}
USER: {prompt}
ASSISTANT: </s>
```
or
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: {prompt} ASSISTANT: </s>......
```
|
{"tags": ["quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "transformers", "safetensors", "mistral", "text-generation", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "text-generation"], "model_name": "WizardLM-2-8x22B-GGUF", "base_model": "microsoft/WizardLM-2-8x22B", "inference": false, "model_creator": "microsoft", "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"}
|
MaziyarPanahi/WizardLM-2-8x22B-GGUF
| null |
[
"transformers",
"gguf",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"mistral",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:microsoft/WizardLM-2-8x22B"
] | null |
2024-04-15T18:58:51+00:00
|
[
"2304.12244",
"2306.08568",
"2308.09583"
] |
[] |
TAGS
#transformers #gguf #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #GGUF #safetensors #mistral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #base_model-microsoft/WizardLM-2-8x22B
|
# MaziyarPanahi/WizardLM-2-8x22B-GGUF
- Model creator: microsoft
- Original model: microsoft/WizardLM-2-8x22B
## Description
MaziyarPanahi/WizardLM-2-8x22B-GGUF contains GGUF format model files for microsoft/WizardLM-2-8x22B.
## How to download
You can download only the quants you need instead of cloning the entire repository as follows:
On Windows:
## Load sharded model
'llama_load_model_from_file' will detect the number of files and will load additional tensors from the rest of files.
## Prompt template
or
|
[
"# MaziyarPanahi/WizardLM-2-8x22B-GGUF\n- Model creator: microsoft\n- Original model: microsoft/WizardLM-2-8x22B",
"## Description\nMaziyarPanahi/WizardLM-2-8x22B-GGUF contains GGUF format model files for microsoft/WizardLM-2-8x22B.",
"## How to download\nYou can download only the quants you need instead of cloning the entire repository as follows:\n\n\n\n\nOn Windows:",
"## Load sharded model\n\n'llama_load_model_from_file' will detect the number of files and will load additional tensors from the rest of files.",
"## Prompt template\n\n\n\nor"
] |
[
"TAGS\n#transformers #gguf #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #GGUF #safetensors #mistral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #base_model-microsoft/WizardLM-2-8x22B \n",
"# MaziyarPanahi/WizardLM-2-8x22B-GGUF\n- Model creator: microsoft\n- Original model: microsoft/WizardLM-2-8x22B",
"## Description\nMaziyarPanahi/WizardLM-2-8x22B-GGUF contains GGUF format model files for microsoft/WizardLM-2-8x22B.",
"## How to download\nYou can download only the quants you need instead of cloning the entire repository as follows:\n\n\n\n\nOn Windows:",
"## Load sharded model\n\n'llama_load_model_from_file' will detect the number of files and will load additional tensors from the rest of files.",
"## Prompt template\n\n\n\nor"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Hermes-2-Pro-Mistral-7B", "WizardLM/WizardMath-7B-V1.1"]}
|
mergekit-community/mergekit-slerp-werhsur
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:WizardLM/WizardMath-7B-V1.1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T18:59:21+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* NousResearch/Hermes-2-Pro-Mistral-7B
* WizardLM/WizardMath-7B-V1.1
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
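The card is empty here; a minimal sketch, assuming the checkpoint was saved with its 4-bit quantization config (the repo is tagged `4-bit`), so the `bitsandbytes` package and a CUDA device are required:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "daanjiri/Biomistral_7b_bhc_5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers applies the quantization config stored in the repo automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```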
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
daanjiri/Biomistral_7b_bhc_5
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-15T19:03:29+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
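The card is empty here; a minimal sketch for a GPTQ-quantized checkpoint (an assumption based on the repo's `gptq`/`autoquant` tags) — the `auto-gptq` backend must be installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pavan178/my-awesome-model-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers detects the GPTQ quantization config in the repo and
# dispatches to the auto-gptq kernels.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```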
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": ["autoquant", "gptq"]}
|
Pavan178/my-awesome-model-GPTQ
| null |
[
"transformers",
"safetensors",
"autoquant",
"gptq",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:07:03+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #autoquant #gptq #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #autoquant #gptq #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation
|
peft
|
## Telugu LLaMA 7B Base Model for Causal LM (v1.0)
### Overview
Welcome to the release of the Telugu LLaMA 7B base model – a significant step forward in Large Language Models (LLMs) for Telugu. This model is specifically designed for Causal Language Modeling (LM) tasks and is ready for immediate inference. It can also be fine-tuned for more specialized Natural Language Processing (NLP) applications.
### Key Features
- **Model Type:** 7B parameter model for Causal LM
- **Language:** Telugu
- **Vocabulary Size:** 50k tokens (32k original + 18k new added)
- **Training Data:** Samanantar Dataset (60k samples)
- **Source Model:** Llama7b-chat-hf
- **Training Precision:** float16
- **License:** MIT
- **Code:** https://github.com/prabhas2002/ENGTOTEL-Transalatron/tree/main/Llama2-7b
### Model Performance
- **Causal Language Modeling:** Generates fluent and contextually relevant Telugu text.
- **Fine-Tuning:** Primed for further fine-tuning on specific Telugu NLP tasks.
- **Multilingual Capability:** Capable of handling Telugu and potentially other languages.
#### Hugging Face Model Hub
- **Model Download:** Available on Hugging Face's model hub for download and offline use.
- **Model Pipelines:** Use the model through Hugging Face's pipelines for text generation and understanding tasks (see the sketch below).
- **Fine-Tuning:** Customize the model for your specific Telugu NLP tasks by fine-tuning on relevant datasets.
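Below is a minimal inference sketch (not from the original card). It assumes this repository hosts a PEFT adapter whose config points at its Llama-2 base, so that `peft`'s `AutoPeftModelForCausalLM` can resolve the base weights; the Telugu prompt and generation settings are illustrative.

```python
# Minimal sketch, assuming this repo is a PEFT adapter; names are from
# this card, dtype/device settings are illustrative.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "Prabhas2002/PreTrained_Telugu_Llama7b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoPeftModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

# "Write a sentence in Telugu." as an illustrative prompt.
inputs = tokenizer("తెలుగు భాషలో ఒక వాక్యం రాయండి.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```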
### Citation
If you use this Telugu LLaMA 7B base model in your work, please cite it using the following BibTeX entry:
```bibtex
@article{PreTrained_Telugu_Llama7b,
  title={Telugu LLaMA 7B Base Model for Causal LM},
  author={Onteru Prabhas Reddy},
  journal={Hugging Face Model Hub},
  year={2024},
  url={https://huggingface.co/Prabhas2002/PreTrained_Telugu_Llama7b}
}
```
### License Information
Please refer to the license information provided with the model for details on usage and distribution.
|
{"language": ["te"], "license": "mit", "library_name": "peft", "datasets": ["uonlp/CulturaX", "ai4bharat/samanantar"], "pipeline_tag": "text2text-generation"}
|
Prabhas2002/PreTrained_Telugu_Llama7b
| null |
[
"peft",
"pytorch",
"llama",
"text2text-generation",
"te",
"dataset:uonlp/CulturaX",
"dataset:ai4bharat/samanantar",
"license:mit",
"region:us"
] | null |
2024-04-15T19:08:41+00:00
|
[] |
[
"te"
] |
TAGS
#peft #pytorch #llama #text2text-generation #te #dataset-uonlp/CulturaX #dataset-ai4bharat/samanantar #license-mit #region-us
|
## Telugu LLaMA 7B Base Model for Causal LM(v1.0)
### Overview
Welcome to the release of the Telugu LLaMA 7B base model – a significant step forward in Language Learning Models (LLMs) for Telugu. This model is specifically designed for Causal Language Modeling (LM) tasks and is ready for immediate inference. It can also be fine-tuned for more specialized Natural Language Processing (NLP) applications.
### Key Features
- Model Type: 7B parameter model for Causal LM
- Language: Telugu
- Vocabulary Size: 50k tokens (32k original + 18k new added)
- Training Data: Smanathar Dataset (60k samples)
- Source Model: Llama7b-chat-hf
- Training Precision: float16
- License: MIT
- Code: URL
### Model Performance
- Causal Language Modeling: Generates fluent and contextually relevant Telugu text.
- Fine-Tuning: Primed for further fine-tuning on specific Telugu NLP tasks.
- Multilingual Capability: Capable of handling Telugu and potentially other languages.
#### Hugging Face Model Hub
- Model Download: Available on Hugging Face's model hub for download and offline use.
- Model Pipelines: Utilize through Hugging Face's pipelines for text generation and understanding tasks.
- Fine-Tuning: Customize the model for your specific Telugu NLP tasks by fine-tuning on relevant datasets.
If you use this Telugu LLaMA 7B base model in your work, please cite it using the following BibTeX entry:
### License Information
Please refer to the license information provided with the model for details on usage and distribution.
|
[
"## Telugu LLaMA 7B Base Model for Causal LM(v1.0)",
"### Overview\n\nWelcome to the release of the Telugu LLaMA 7B base model – a significant step forward in Language Learning Models (LLMs) for Telugu. This model is specifically designed for Causal Language Modeling (LM) tasks and is ready for immediate inference. It can also be fine-tuned for more specialized Natural Language Processing (NLP) applications.",
"### Key Features\n\n- Model Type: 7B parameter model for Causal LM\n- Language: Telugu\n- Vocabulary Size: 50k tokens (32k original + 18k new added)\n- Training Data: Smanathar Dataset (60k samples)\n- Source Model: Llama7b-chat-hf\n- Training Precision: float16\n- License: MIT\n- Code: URL",
"### Model Performance\n\n- Causal Language Modeling: Generates fluent and contextually relevant Telugu text.\n- Fine-Tuning: Primed for further fine-tuning on specific Telugu NLP tasks.\n- Multilingual Capability: Capable of handling Telugu and potentially other languages.",
"#### Hugging Face Model Hub\n\n- Model Download: Available on Hugging Face's model hub for download and offline use.\n- Model Pipelines: Utilize through Hugging Face's pipelines for text generation and understanding tasks.\n- Fine-Tuning: Customize the model for your specific Telugu NLP tasks by fine-tuning on relevant datasets.\n\nIf you use this Telugu LLaMA 7B base model in your work, please cite it using the following BibTeX entry:",
"### License Information\n\nPlease refer to the license information provided with the model for details on usage and distribution."
] |
[
"TAGS\n#peft #pytorch #llama #text2text-generation #te #dataset-uonlp/CulturaX #dataset-ai4bharat/samanantar #license-mit #region-us \n",
"## Telugu LLaMA 7B Base Model for Causal LM(v1.0)",
"### Overview\n\nWelcome to the release of the Telugu LLaMA 7B base model – a significant step forward in Language Learning Models (LLMs) for Telugu. This model is specifically designed for Causal Language Modeling (LM) tasks and is ready for immediate inference. It can also be fine-tuned for more specialized Natural Language Processing (NLP) applications.",
"### Key Features\n\n- Model Type: 7B parameter model for Causal LM\n- Language: Telugu\n- Vocabulary Size: 50k tokens (32k original + 18k new added)\n- Training Data: Smanathar Dataset (60k samples)\n- Source Model: Llama7b-chat-hf\n- Training Precision: float16\n- License: MIT\n- Code: URL",
"### Model Performance\n\n- Causal Language Modeling: Generates fluent and contextually relevant Telugu text.\n- Fine-Tuning: Primed for further fine-tuning on specific Telugu NLP tasks.\n- Multilingual Capability: Capable of handling Telugu and potentially other languages.",
"#### Hugging Face Model Hub\n\n- Model Download: Available on Hugging Face's model hub for download and offline use.\n- Model Pipelines: Utilize through Hugging Face's pipelines for text generation and understanding tasks.\n- Fine-Tuning: Customize the model for your specific Telugu NLP tasks by fine-tuning on relevant datasets.\n\nIf you use this Telugu LLaMA 7B base model in your work, please cite it using the following BibTeX entry:",
"### License Information\n\nPlease refer to the license information provided with the model for details on usage and distribution."
] |
null |
transformers
|
# Uploaded model
- **Developed by:** lomashirl
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
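As a usage sketch (not part of the original card), the checkpoint can be loaded with plain transformers; this assumes the repo holds full merged weights, and the Gujarati prompt is illustrative.

```python
# Sketch assuming the repo holds merged weights; if it is a LoRA adapter,
# load it via peft on top of unsloth/gemma-2b-it-bnb-4bit instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "lomashirl/Gemma-2b-Alpaca-Gujarati"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# "Write a short sentence in Gujarati." as an illustrative prompt.
messages = [{"role": "user", "content": "ગુજરાતીમાં એક ટૂંકું વાક્ય લખો."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```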
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b-it-bnb-4bit"}
|
lomashirl/Gemma-2b-Alpaca-Gujarati
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:11:33+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: lomashirl
- License: apache-2.0
- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: lomashirl\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: lomashirl\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|
# CalmexperimentOgnoexperiment27multi_verse_model-7B
CalmexperimentOgnoexperiment27multi_verse_model-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [automerger/Ognoexperiment27Multi_verse_model-7B](https://huggingface.co/automerger/Ognoexperiment27Multi_verse_model-7B)
## 🧩 Configuration
```yaml
models:
- model: allknowingroger/CalmExperiment-7B-slerp
# No parameters necessary for base model
- model: automerger/Ognoexperiment27Multi_verse_model-7B
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: allknowingroger/CalmExperiment-7B-slerp
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/CalmexperimentOgnoexperiment27multi_verse_model-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["automerger/Ognoexperiment27Multi_verse_model-7B"]}
|
automerger/CalmexperimentOgnoexperiment27multi_verse_model-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:automerger/Ognoexperiment27Multi_verse_model-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T19:11:56+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #base_model-automerger/Ognoexperiment27Multi_verse_model-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CalmexperimentOgnoexperiment27multi_verse_model-7B
CalmexperimentOgnoexperiment27multi_verse_model-7B is an automated merge created by Maxime Labonne using the following configuration.
* automerger/Ognoexperiment27Multi_verse_model-7B
## Configuration
## Usage
|
[
"# CalmexperimentOgnoexperiment27multi_verse_model-7B\n\nCalmexperimentOgnoexperiment27multi_verse_model-7B is an automated merge created by Maxime Labonne using the following configuration.\n* automerger/Ognoexperiment27Multi_verse_model-7B",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #base_model-automerger/Ognoexperiment27Multi_verse_model-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CalmexperimentOgnoexperiment27multi_verse_model-7B\n\nCalmexperimentOgnoexperiment27multi_verse_model-7B is an automated merge created by Maxime Labonne using the following configuration.\n* automerger/Ognoexperiment27Multi_verse_model-7B",
"## Configuration",
"## Usage"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_tata-seqsight_4096_512_46M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9208
- F1 Score: 0.6312
- Accuracy: 0.6313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
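For orientation (this is not from the original card), the list above maps roughly onto the following transformers `TrainingArguments`; the actual training script is not published here, so the batch-size fields in particular are assumptions.

```python
# Rough mapping of the hyperparameters above onto transformers
# TrainingArguments; whether 2048 is per-device or total is an assumption.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_prom_prom_300_tata-seqsight_4096_512_46M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```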
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5453 | 66.67 | 200 | 0.9218 | 0.6548 | 0.6558 |
| 0.2372 | 133.33 | 400 | 1.3049 | 0.6183 | 0.6183 |
| 0.1345 | 200.0 | 600 | 1.5320 | 0.6250 | 0.6248 |
| 0.0902 | 266.67 | 800 | 1.7346 | 0.6115 | 0.6150 |
| 0.0663 | 333.33 | 1000 | 1.8558 | 0.6202 | 0.6199 |
| 0.0547 | 400.0 | 1200 | 2.0401 | 0.6283 | 0.6281 |
| 0.0438 | 466.67 | 1400 | 2.1493 | 0.6361 | 0.6362 |
| 0.0378 | 533.33 | 1600 | 2.3435 | 0.6246 | 0.6248 |
| 0.0335 | 600.0 | 1800 | 2.4545 | 0.6201 | 0.6199 |
| 0.0295 | 666.67 | 2000 | 2.5264 | 0.6299 | 0.6297 |
| 0.028 | 733.33 | 2200 | 2.2992 | 0.6299 | 0.6297 |
| 0.0258 | 800.0 | 2400 | 2.3492 | 0.6430 | 0.6427 |
| 0.0237 | 866.67 | 2600 | 2.3384 | 0.6271 | 0.6281 |
| 0.0222 | 933.33 | 2800 | 2.6912 | 0.6296 | 0.6313 |
| 0.0214 | 1000.0 | 3000 | 2.7081 | 0.6216 | 0.6215 |
| 0.0198 | 1066.67 | 3200 | 2.4575 | 0.6251 | 0.6248 |
| 0.0193 | 1133.33 | 3400 | 2.6720 | 0.6495 | 0.6493 |
| 0.019 | 1200.0 | 3600 | 2.5038 | 0.6413 | 0.6411 |
| 0.0175 | 1266.67 | 3800 | 2.3496 | 0.6413 | 0.6411 |
| 0.0164 | 1333.33 | 4000 | 2.5110 | 0.6445 | 0.6444 |
| 0.0162 | 1400.0 | 4200 | 2.7291 | 0.6283 | 0.6281 |
| 0.0151 | 1466.67 | 4400 | 2.5535 | 0.6397 | 0.6395 |
| 0.0152 | 1533.33 | 4600 | 2.8508 | 0.6347 | 0.6346 |
| 0.0144 | 1600.0 | 4800 | 2.8463 | 0.6278 | 0.6281 |
| 0.0138 | 1666.67 | 5000 | 2.5506 | 0.6457 | 0.6460 |
| 0.0131 | 1733.33 | 5200 | 2.5626 | 0.6364 | 0.6362 |
| 0.0121 | 1800.0 | 5400 | 2.8232 | 0.6397 | 0.6395 |
| 0.0117 | 1866.67 | 5600 | 2.7807 | 0.6397 | 0.6395 |
| 0.0112 | 1933.33 | 5800 | 2.7485 | 0.6312 | 0.6313 |
| 0.0113 | 2000.0 | 6000 | 2.8893 | 0.6330 | 0.6330 |
| 0.0113 | 2066.67 | 6200 | 2.8597 | 0.6414 | 0.6411 |
| 0.01 | 2133.33 | 6400 | 2.9061 | 0.6475 | 0.6476 |
| 0.0104 | 2200.0 | 6600 | 2.7424 | 0.6380 | 0.6378 |
| 0.0101 | 2266.67 | 6800 | 2.8906 | 0.6361 | 0.6362 |
| 0.0094 | 2333.33 | 7000 | 2.6584 | 0.6314 | 0.6313 |
| 0.0095 | 2400.0 | 7200 | 2.9275 | 0.6446 | 0.6444 |
| 0.0092 | 2466.67 | 7400 | 2.9618 | 0.6348 | 0.6346 |
| 0.0092 | 2533.33 | 7600 | 2.8459 | 0.6443 | 0.6444 |
| 0.0084 | 2600.0 | 7800 | 2.8913 | 0.6445 | 0.6444 |
| 0.0085 | 2666.67 | 8000 | 2.7676 | 0.6390 | 0.6395 |
| 0.0081 | 2733.33 | 8200 | 2.9487 | 0.6347 | 0.6346 |
| 0.008 | 2800.0 | 8400 | 2.8634 | 0.6413 | 0.6411 |
| 0.0077 | 2866.67 | 8600 | 3.0542 | 0.6381 | 0.6378 |
| 0.0079 | 2933.33 | 8800 | 2.9114 | 0.6397 | 0.6395 |
| 0.0074 | 3000.0 | 9000 | 3.0313 | 0.6364 | 0.6362 |
| 0.0076 | 3066.67 | 9200 | 2.7971 | 0.6397 | 0.6395 |
| 0.0072 | 3133.33 | 9400 | 2.9732 | 0.6413 | 0.6411 |
| 0.0075 | 3200.0 | 9600 | 2.8531 | 0.6315 | 0.6313 |
| 0.0071 | 3266.67 | 9800 | 2.8677 | 0.6364 | 0.6362 |
| 0.0069 | 3333.33 | 10000 | 2.9255 | 0.6348 | 0.6346 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_4096_512_46M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_4096_512_46M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null |
2024-04-15T19:12:16+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
|
GUE\_prom\_prom\_300\_tata-seqsight\_4096\_512\_46M-L32\_all
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9208
* F1 Score: 0.6312
* Accuracy: 0.6313
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0
|
{"library_name": "peft", "base_model": "meta-llama/Llama-2-7b-chat-hf"}
|
ASaska/Llama-2-7b-chat-hf-ft
| null |
[
"peft",
"pytorch",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null |
2024-04-15T19:15:40+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #pytorch #llama #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
[
"TAGS\n#peft #pytorch #llama #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_4096_512_46M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5126
- F1 Score: 0.8820
- Accuracy: 0.8820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5091 | 7.14 | 200 | 0.3937 | 0.8242 | 0.8248 |
| 0.3814 | 14.29 | 400 | 0.3645 | 0.8393 | 0.8393 |
| 0.3212 | 21.43 | 600 | 0.3130 | 0.8660 | 0.8662 |
| 0.273 | 28.57 | 800 | 0.3178 | 0.8726 | 0.8726 |
| 0.2457 | 35.71 | 1000 | 0.3158 | 0.8733 | 0.8734 |
| 0.2242 | 42.86 | 1200 | 0.3394 | 0.8666 | 0.8666 |
| 0.2041 | 50.0 | 1400 | 0.3359 | 0.8726 | 0.8726 |
| 0.1873 | 57.14 | 1600 | 0.3383 | 0.8741 | 0.8741 |
| 0.1732 | 64.29 | 1800 | 0.3676 | 0.8751 | 0.8751 |
| 0.1627 | 71.43 | 2000 | 0.3641 | 0.8745 | 0.8745 |
| 0.1533 | 78.57 | 2200 | 0.3743 | 0.8728 | 0.8728 |
| 0.1438 | 85.71 | 2400 | 0.3660 | 0.8745 | 0.8745 |
| 0.1364 | 92.86 | 2600 | 0.3773 | 0.8751 | 0.8751 |
| 0.1291 | 100.0 | 2800 | 0.4157 | 0.8739 | 0.8739 |
| 0.1226 | 107.14 | 3000 | 0.3914 | 0.8721 | 0.8721 |
| 0.1182 | 114.29 | 3200 | 0.4390 | 0.8678 | 0.8679 |
| 0.1137 | 121.43 | 3400 | 0.4201 | 0.8733 | 0.8734 |
| 0.1068 | 128.57 | 3600 | 0.4551 | 0.8714 | 0.8715 |
| 0.1038 | 135.71 | 3800 | 0.4237 | 0.8745 | 0.8745 |
| 0.0995 | 142.86 | 4000 | 0.4398 | 0.8667 | 0.8668 |
| 0.0964 | 150.0 | 4200 | 0.4678 | 0.8686 | 0.8687 |
| 0.0918 | 157.14 | 4400 | 0.4565 | 0.8762 | 0.8762 |
| 0.0888 | 164.29 | 4600 | 0.4532 | 0.8751 | 0.8751 |
| 0.0863 | 171.43 | 4800 | 0.4944 | 0.8676 | 0.8677 |
| 0.0822 | 178.57 | 5000 | 0.4730 | 0.8739 | 0.8739 |
| 0.0805 | 185.71 | 5200 | 0.4573 | 0.8762 | 0.8762 |
| 0.079 | 192.86 | 5400 | 0.4927 | 0.8729 | 0.8730 |
| 0.0763 | 200.0 | 5600 | 0.4990 | 0.8724 | 0.8724 |
| 0.0731 | 207.14 | 5800 | 0.4750 | 0.8769 | 0.8770 |
| 0.0717 | 214.29 | 6000 | 0.5008 | 0.8790 | 0.8790 |
| 0.0709 | 221.43 | 6200 | 0.4993 | 0.8733 | 0.8734 |
| 0.0689 | 228.57 | 6400 | 0.4999 | 0.8764 | 0.8764 |
| 0.0669 | 235.71 | 6600 | 0.5127 | 0.8718 | 0.8719 |
| 0.0662 | 242.86 | 6800 | 0.4918 | 0.8779 | 0.8779 |
| 0.0634 | 250.0 | 7000 | 0.5051 | 0.8737 | 0.8738 |
| 0.064 | 257.14 | 7200 | 0.5058 | 0.8748 | 0.8749 |
| 0.0606 | 264.29 | 7400 | 0.5412 | 0.8689 | 0.8690 |
| 0.0608 | 271.43 | 7600 | 0.5068 | 0.8798 | 0.8798 |
| 0.06 | 278.57 | 7800 | 0.4909 | 0.8770 | 0.8770 |
| 0.0585 | 285.71 | 8000 | 0.5263 | 0.8756 | 0.8756 |
| 0.058 | 292.86 | 8200 | 0.5130 | 0.8787 | 0.8787 |
| 0.0571 | 300.0 | 8400 | 0.4988 | 0.8809 | 0.8809 |
| 0.0556 | 307.14 | 8600 | 0.5262 | 0.8786 | 0.8787 |
| 0.0557 | 314.29 | 8800 | 0.5292 | 0.8803 | 0.8803 |
| 0.0546 | 321.43 | 9000 | 0.5193 | 0.8799 | 0.8800 |
| 0.0547 | 328.57 | 9200 | 0.5151 | 0.8805 | 0.8805 |
| 0.053 | 335.71 | 9400 | 0.5267 | 0.8779 | 0.8779 |
| 0.0524 | 342.86 | 9600 | 0.5293 | 0.8781 | 0.8781 |
| 0.0531 | 350.0 | 9800 | 0.5254 | 0.8775 | 0.8775 |
| 0.0523 | 357.14 | 10000 | 0.5279 | 0.8777 | 0.8777 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_4096_512_46M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_46M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null |
2024-04-15T19:16:45+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
|
GUE\_prom\_prom\_300\_notata-seqsight\_4096\_512\_46M-L32\_all
==============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5126
* F1 Score: 0.8820
* Accuracy: 0.8820
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# JSL-MedMNX-7B
[<img src="https://repository-images.githubusercontent.com/104670986/2e728700-ace4-11ea-9cfc-f3e060b25ddf">](http://www.johnsnowlabs.com)
JSL-MedMNX-7B is a 7 Billion parameter model developed by [John Snow Labs](https://www.johnsnowlabs.com/).
This model is trained on medical datasets to provide state-of-the-art performance on biomedical benchmarks: [Open Medical LLM Leaderboard](https://huggingface.co/spaces/openlifescienceai/open_medical_llm_leaderboard).
This model is available under a [CC-BY-NC-ND](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en) license and must also conform to this [Acceptable Use Policy](https://huggingface.co/johnsnowlabs). If you need to license this model for commercial use, please contact us at info@johnsnowlabs.com.
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "johnsnowlabs/JSL-MedMNX-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🏆 Evaluation
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------------------------|-------|------|-----:|--------|-----:|---|-----:|
|stem |N/A |none | 0|acc_norm|0.5191|± |0.0068|
| | |none | 0|acc |0.5658|± |0.0058|
| - medmcqa |Yaml |none | 0|acc |0.5135|± |0.0077|
| | |none | 0|acc_norm|0.5135|± |0.0077|
| - medqa_4options |Yaml |none | 0|acc |0.5373|± |0.0140|
| | |none | 0|acc_norm|0.5373|± |0.0140|
| - anatomy (mmlu) | 0|none | 0|acc |0.6370|± |0.0415|
| - clinical_knowledge (mmlu) | 0|none | 0|acc |0.7245|± |0.0275|
| - college_biology (mmlu) | 0|none | 0|acc |0.7500|± |0.0362|
| - college_medicine (mmlu) | 0|none | 0|acc |0.6590|± |0.0361|
| - medical_genetics (mmlu) | 0|none | 0|acc |0.7200|± |0.0451|
| - professional_medicine (mmlu)| 0|none | 0|acc |0.7206|± |0.0273|
| - pubmedqa | 1|none | 0|acc |0.7720|± |0.0188|
|Groups|Version|Filter|n-shot| Metric |Value | |Stderr|
|------|-------|------|-----:|--------|-----:|---|-----:|
|stem |N/A |none | 0|acc_norm|0.5191|± |0.0068|
| | |none | 0|acc |0.5658|± |0.0058|
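The tables above match lm-evaluation-harness output; as a sketch (the task names and flags are assumptions and vary across harness versions), a run along these lines could reproduce them:

```shell
pip install lm-eval
lm_eval --model hf \
    --model_args pretrained=johnsnowlabs/JSL-MedMNX-7B,dtype=float16 \
    --tasks medmcqa,medqa_4options,pubmedqa,mmlu_anatomy,mmlu_clinical_knowledge,mmlu_college_biology,mmlu_college_medicine,mmlu_medical_genetics,mmlu_professional_medicine \
    --batch_size 8
```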
|
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "library_name": "transformers", "tags": ["reward model", "RLHF", "medical"]}
|
johnsnowlabs/JSL-MedMNX-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"reward model",
"RLHF",
"medical",
"conversational",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T19:18:02+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #reward model #RLHF #medical #conversational #en #license-cc-by-nc-nd-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
JSL-MedMNX-7B
=============
<img src="URL
JSL-MedMNX-7B is a 7 Billion parameter model developed by John Snow Labs.
This model is trained on medical datasets to provide state-of-the-art performance on biomedical benchmarks: Open Medical LLM Leaderboard.
This model is available under a CC-BY-NC-ND license and must also conform to this Acceptable Use Policy. If you need to license this model for commercial use, please contact us at info@URL.
Usage
-----
Evaluation
----------
|
[] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #reward model #RLHF #medical #conversational #en #license-cc-by-nc-nd-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
## Exllama v2 Quantizations of wavecoder-ultra-6.7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.18">turboderp's ExLlamaV2 v0.0.18</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/microsoft/wavecoder-ultra-6.7b
## Prompt format
This seems to follow the DeepSeek coder format, aka Alpaca.
```
{system_prompt}
### Instruction: {prompt}
### Response:
```
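A small helper (illustrative, not from the original card) shows how a prompt in this format can be assembled:

```python
# Assemble a prompt in the Alpaca-style format shown above; the example
# strings are placeholders.
def build_prompt(system_prompt: str, instruction: str) -> str:
    return f"{system_prompt}\n### Instruction: {instruction}\n### Response:\n"

print(build_prompt(
    "You are an AI programming assistant.",
    "Write a Python function that reverses a string.",
))
```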
## Available sizes
No GQA - VRAM requirements will be higher
| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-exl2/tree/8_0) | 8.0 | 8.0 | 9.0 GB | 15.2 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-exl2/tree/6_5) | 6.5 | 8.0 | 8.2 GB | 14.4 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-exl2/tree/5_0) | 5.0 | 6.0 | 6.8 GB | 13.0 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. |
| [4_25](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-exl2/tree/4_25) | 4.25 | 6.0 | 6.1 GB | 12.3 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-exl2/tree/3_5) | 3.5 | 6.0 | 5.5 GB | 11.7 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/wavecoder-ultra-6.7b-exl2 wavecoder-ultra-6.7b-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download bartowski/wavecoder-ultra-6.7b-exl2 --revision 6_5 --local-dir wavecoder-ultra-6.7b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download bartowski/wavecoder-ultra-6.7b-exl2 --revision 6_5 --local-dir wavecoder-ultra-6.7b-exl2-6.5 --local-dir-use-symlinks False
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
{"license": "other", "library_name": "transformers", "tags": ["code"], "datasets": ["humaneval"], "metrics": ["code_eval"], "license_name": "deepseek", "pipeline_tag": "text-generation", "quantized_by": "bartowski"}
|
bartowski/wavecoder-ultra-6.7b-exl2
| null |
[
"transformers",
"code",
"text-generation",
"dataset:humaneval",
"license:other",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:18:07+00:00
|
[] |
[] |
TAGS
#transformers #code #text-generation #dataset-humaneval #license-other #endpoints_compatible #region-us
|
Exllama v2 Quantizations of wavecoder-ultra-6.7b
------------------------------------------------
Using <a href="URL ExLlamaV2 v0.0.18 for quantization.
**The "main" branch only contains the URL, download one of the other branches for the model (see below)**
Each branch contains an individual bits per weight, with the main one containing only the URL for further conversions.
Original model: URL
Prompt format
-------------
This seems to follow the DeepSeek coder format, aka Alpaca.
Available sizes
---------------
No GQA - VRAM requirements will be higher
Download instructions
---------------------
With git:
With huggingface hub (credit to TheBloke for instructions):
To download a specific branch, use the '--revision' parameter. For example, to download the 6.5 bpw branch:
Linux:
Windows (which apparently doesn't like \_ in folders sometimes?):
Want to support my work? Visit my ko-fi page here: URL
|
[] |
[
"TAGS\n#transformers #code #text-generation #dataset-humaneval #license-other #endpoints_compatible #region-us \n"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1369
- F1: 0.8633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1533 | 0.8310 |
| 0.1274 | 2.0 | 1050 | 0.1328 | 0.8534 |
| 0.0802 | 3.0 | 1575 | 0.1369 | 0.8633 |
### Framework versions
- Transformers 4.39.0
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
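As a usage sketch (not part of the generated card), the fine-tuned checkpoint can be run through the token-classification pipeline; the repo id below is taken from this entry's metadata and the German example sentence is illustrative.

```python
from transformers import pipeline

# aggregation_strategy="simple" merges subword pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="AAA01101312/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```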
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": []}]}
|
AAA01101312/xlm-roberta-base-finetuned-panx-de
| null |
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:18:35+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
xlm-roberta-base-finetuned-panx-de
==================================
This model is a fine-tuned version of xlm-roberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1369
* F1: 0.8633
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 24
* eval\_batch\_size: 24
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0
* Pytorch 2.2.1+cpu
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0\n* Pytorch 2.2.1+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0\n* Pytorch 2.2.1+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
This is an ExLlamaV2 quantized model in 4bpw of [mpasila/PIPPA-Named-7B](https://huggingface.co/mpasila/PIPPA-Named-7B) using the default calibration dataset.
# Original Model card:
This is a merge of [mpasila/PIPPA-Named-LoRA-7B](https://huggingface.co/mpasila/PIPPA-Named-LoRA-7B/).
LoRA trained in 4-bit with 8k context using [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf/) as the base model for 1 epoch.
Dataset used is [a modified](https://huggingface.co/datasets/mpasila/PIPPA-ShareGPT-formatted-named) version of [KaraKaraWitch/PIPPA-ShareGPT-formatted](https://huggingface.co/datasets/KaraKaraWitch/PIPPA-ShareGPT-formatted).
### Prompt format: ChatML
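For reference (added here, not in the original card), a single ChatML exchange is laid out like this, with placeholders in braces:

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```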
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft", "not-for-all-audiences"], "datasets": ["mpasila/PIPPA-ShareGPT-formatted-named", "KaraKaraWitch/PIPPA-ShareGPT-formatted"], "base_model": "unsloth/mistral-7b-v0.2-bnb-4bit"}
|
mpasila/PIPPA-Named-7B-exl2-4bpw
| null |
[
"transformers",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"not-for-all-audiences",
"conversational",
"en",
"dataset:mpasila/PIPPA-ShareGPT-formatted-named",
"dataset:KaraKaraWitch/PIPPA-ShareGPT-formatted",
"base_model:unsloth/mistral-7b-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:24:32+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #mistral #text-generation #text-generation-inference #unsloth #trl #sft #not-for-all-audiences #conversational #en #dataset-mpasila/PIPPA-ShareGPT-formatted-named #dataset-KaraKaraWitch/PIPPA-ShareGPT-formatted #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
This is an ExLlamaV2 quantization of mpasila/PIPPA-Named-7B at 4 bpw, made with the default calibration dataset.
# Original Model card:
This is a merge of mpasila/PIPPA-Named-LoRA-7B.
LoRA trained in 4-bit with 8k context using alpindale/Mistral-7B-v0.2-hf as the base model for 1 epoch.
Dataset used is a modified version of KaraKaraWitch/PIPPA-ShareGPT-formatted.
### Prompt format: ChatML
# Uploaded model
- Developed by: mpasila
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Original Model card:\n\nThis is a merge of mpasila/PIPPA-Named-LoRA-7B.\n\nLoRA trained in 4-bit with 8k context using alpindale/Mistral-7B-v0.2-hf as the base model for 1 epoch.\n\nDataset used is a modified version of KaraKaraWitch/PIPPA-ShareGPT-formatted.",
"### Prompt format: ChatML",
"# Uploaded model\n\n- Developed by: mpasila\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #mistral #text-generation #text-generation-inference #unsloth #trl #sft #not-for-all-audiences #conversational #en #dataset-mpasila/PIPPA-ShareGPT-formatted-named #dataset-KaraKaraWitch/PIPPA-ShareGPT-formatted #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Original Model card:\n\nThis is a merge of mpasila/PIPPA-Named-LoRA-7B.\n\nLoRA trained in 4-bit with 8k context using alpindale/Mistral-7B-v0.2-hf as the base model for 1 epoch.\n\nDataset used is a modified version of KaraKaraWitch/PIPPA-ShareGPT-formatted.",
"### Prompt format: ChatML",
"# Uploaded model\n\n- Developed by: mpasila\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# interpro_bert_2
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4333
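Not from the original card: a minimal sketch of querying the checkpoint with the fill-mask pipeline. The card does not describe the input domain, so the masked sentence below is a placeholder, not a representative input.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Dauka-transformers/interpro_bert_2")
# Placeholder sequence -- replace with real tokens for this model's domain.
print(unmasker("IPR000001 [MASK] IPR000003"))
```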
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 128
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 2048
- total_eval_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.2702 | 1.0 | 14395 | 1.1699 |
| 0.9079 | 2.0 | 28790 | 0.8665 |
| 0.7738 | 3.0 | 43185 | 0.7505 |
| 0.6959 | 4.0 | 57580 | 0.6820 |
| 0.6327 | 5.0 | 71975 | 0.6302 |
| 0.5899 | 6.0 | 86370 | 0.5956 |
| 0.5462 | 7.0 | 100765 | 0.5654 |
| 0.5155 | 8.0 | 115160 | 0.5395 |
| 0.4836 | 9.0 | 129555 | 0.5149 |
| 0.4633 | 10.0 | 143950 | 0.4984 |
| 0.441 | 11.0 | 158345 | 0.4774 |
| 0.4212 | 12.0 | 172740 | 0.4641 |
| 0.404 | 13.0 | 187135 | 0.4479 |
| 0.3883 | 14.0 | 201530 | 0.4401 |
| 0.3781 | 15.0 | 215925 | 0.4333 |
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "interpro_bert_2", "results": []}]}
|
Dauka-transformers/interpro_bert_2
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:25:17+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
interpro\_bert\_2
=================
This model is a fine-tuned version of [](URL) on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4333
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 256
* eval\_batch\_size: 128
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 8
* total\_train\_batch\_size: 2048
* total\_eval\_batch\_size: 1024
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
### Training results
### Framework versions
* Transformers 4.39.2
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 128\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 2048\n* total\\_eval\\_batch\\_size: 1024\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 256\n* eval\\_batch\\_size: 128\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 2048\n* total\\_eval\\_batch\\_size: 1024\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# ECE-TW3-JRGL-VHF2
ECE-TW3-JRGL-VHF2 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
* [davidkim205/Rhea-72b-v0.5](https://huggingface.co/davidkim205/Rhea-72b-v0.5)
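The card stops before the mergekit YAML, so only a usage sketch can be offered. A minimal, hedged example of loading the merged checkpoint; the dtype and device settings are assumptions, and a 72B-class model needs multiple GPUs or offloading:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("IAFrance/ECE-TW3-JRGL-VHF2")
model = AutoModelForCausalLM.from_pretrained(
    "IAFrance/ECE-TW3-JRGL-VHF2",
    torch_dtype=torch.bfloat16,  # assumption
    device_map="auto",
)
inputs = tok("The capital of France is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
```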
## 🧩 Configuration
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "abacusai/Smaug-72B-v0.1", "davidkim205/Rhea-72b-v0.5"]}
|
IAFrance/ECE-TW3-JRGL-VHF2
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"abacusai/Smaug-72B-v0.1",
"davidkim205/Rhea-72b-v0.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T19:28:16+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #abacusai/Smaug-72B-v0.1 #davidkim205/Rhea-72b-v0.5 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ECE-TW3-JRGL-VHF2
ECE-TW3-JRGL-VHF2 is a merge of the following models using mergekit:
* abacusai/Smaug-72B-v0.1
* davidkim205/Rhea-72b-v0.5
## Configuration
|
[
"# ECE-TW3-JRGL-VHF2\n\nECE-TW3-JRGL-VHF2 is a merge of the following models using mergekit:\n* abacusai/Smaug-72B-v0.1\n* davidkim205/Rhea-72b-v0.5",
"## Configuration"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #abacusai/Smaug-72B-v0.1 #davidkim205/Rhea-72b-v0.5 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ECE-TW3-JRGL-VHF2\n\nECE-TW3-JRGL-VHF2 is a merge of the following models using mergekit:\n* abacusai/Smaug-72B-v0.1\n* davidkim205/Rhea-72b-v0.5",
"## Configuration"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_4096_512_46M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5938
- F1 Score: 0.7235
- Accuracy: 0.7235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
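Not part of the original card: a hedged sketch of attaching this PEFT adapter to its base model. The head class and label count are assumptions (the card reports F1 and accuracy, which suggests binary sequence classification), and the seqsight base may require `trust_remote_code=True`.

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_46M", num_labels=2  # assumptions
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_46M-L32_all"
)
```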
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6182 | 8.33 | 200 | 0.5799 | 0.7004 | 0.7005 |
| 0.5471 | 16.67 | 400 | 0.5648 | 0.7167 | 0.7169 |
| 0.5207 | 25.0 | 600 | 0.5568 | 0.7231 | 0.7231 |
| 0.4942 | 33.33 | 800 | 0.5667 | 0.7275 | 0.7275 |
| 0.4722 | 41.67 | 1000 | 0.5737 | 0.7310 | 0.7311 |
| 0.4559 | 50.0 | 1200 | 0.5810 | 0.7300 | 0.7304 |
| 0.4395 | 58.33 | 1400 | 0.5864 | 0.7330 | 0.7331 |
| 0.4262 | 66.67 | 1600 | 0.5956 | 0.7341 | 0.7343 |
| 0.4157 | 75.0 | 1800 | 0.5947 | 0.7253 | 0.7257 |
| 0.403 | 83.33 | 2000 | 0.6085 | 0.7326 | 0.7328 |
| 0.3909 | 91.67 | 2200 | 0.6191 | 0.7241 | 0.7247 |
| 0.3805 | 100.0 | 2400 | 0.6430 | 0.7301 | 0.7301 |
| 0.3677 | 108.33 | 2600 | 0.6442 | 0.7321 | 0.7321 |
| 0.3571 | 116.67 | 2800 | 0.6825 | 0.7279 | 0.7285 |
| 0.3452 | 125.0 | 3000 | 0.6755 | 0.7255 | 0.7262 |
| 0.3339 | 133.33 | 3200 | 0.7027 | 0.7270 | 0.7275 |
| 0.3254 | 141.67 | 3400 | 0.7100 | 0.7139 | 0.7157 |
| 0.3139 | 150.0 | 3600 | 0.6953 | 0.7218 | 0.7221 |
| 0.3055 | 158.33 | 3800 | 0.7117 | 0.7172 | 0.7184 |
| 0.2961 | 166.67 | 4000 | 0.7463 | 0.7276 | 0.7280 |
| 0.2887 | 175.0 | 4200 | 0.7548 | 0.7144 | 0.7160 |
| 0.281 | 183.33 | 4400 | 0.7449 | 0.7227 | 0.7231 |
| 0.2712 | 191.67 | 4600 | 0.7825 | 0.7201 | 0.7209 |
| 0.2648 | 200.0 | 4800 | 0.7807 | 0.7207 | 0.7216 |
| 0.2585 | 208.33 | 5000 | 0.7717 | 0.7162 | 0.7169 |
| 0.2513 | 216.67 | 5200 | 0.7949 | 0.7119 | 0.7130 |
| 0.2466 | 225.0 | 5400 | 0.8211 | 0.7185 | 0.7189 |
| 0.2388 | 233.33 | 5600 | 0.8178 | 0.7202 | 0.7208 |
| 0.2336 | 241.67 | 5800 | 0.8418 | 0.7117 | 0.7128 |
| 0.2289 | 250.0 | 6000 | 0.8372 | 0.7177 | 0.7181 |
| 0.2233 | 258.33 | 6200 | 0.8382 | 0.7138 | 0.7144 |
| 0.2198 | 266.67 | 6400 | 0.8580 | 0.7144 | 0.7150 |
| 0.2151 | 275.0 | 6600 | 0.8456 | 0.7034 | 0.7042 |
| 0.2116 | 283.33 | 6800 | 0.8620 | 0.7121 | 0.7127 |
| 0.2075 | 291.67 | 7000 | 0.8774 | 0.7126 | 0.7133 |
| 0.2037 | 300.0 | 7200 | 0.8830 | 0.7148 | 0.7154 |
| 0.1998 | 308.33 | 7400 | 0.9136 | 0.7094 | 0.7111 |
| 0.1969 | 316.67 | 7600 | 0.9050 | 0.7095 | 0.7105 |
| 0.1929 | 325.0 | 7800 | 0.9307 | 0.7087 | 0.7103 |
| 0.1909 | 333.33 | 8000 | 0.9043 | 0.7102 | 0.7110 |
| 0.1881 | 341.67 | 8200 | 0.9205 | 0.7104 | 0.7110 |
| 0.1858 | 350.0 | 8400 | 0.9081 | 0.7115 | 0.7118 |
| 0.1853 | 358.33 | 8600 | 0.9076 | 0.7118 | 0.7123 |
| 0.183 | 366.67 | 8800 | 0.9139 | 0.7097 | 0.7105 |
| 0.1819 | 375.0 | 9000 | 0.9223 | 0.7106 | 0.7115 |
| 0.1807 | 383.33 | 9200 | 0.9208 | 0.7109 | 0.7117 |
| 0.1778 | 391.67 | 9400 | 0.9295 | 0.7113 | 0.7120 |
| 0.1766 | 400.0 | 9600 | 0.9374 | 0.7086 | 0.7095 |
| 0.1771 | 408.33 | 9800 | 0.9362 | 0.7111 | 0.7118 |
| 0.1761 | 416.67 | 10000 | 0.9329 | 0.7099 | 0.7106 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_46M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_46M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null |
2024-04-15T19:29:00+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
|
GUE\_prom\_prom\_core\_all-seqsight\_4096\_512\_46M-L32\_all
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5938
* F1 Score: 0.7235
* Accuracy: 0.7235
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/microsoft/WizardLM-2-8x22B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/WizardLM-2-8x22B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
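As a concrete illustration (not from the original card), here is how to join a two-part download back into one GGUF file in Python; the part names come from the Q4_K_S row of the table below and should be adjusted for the quant you downloaded:

```python
import shutil

parts = [
    "WizardLM-2-8x22B.Q4_K_S.gguf.part1of2",
    "WizardLM-2-8x22B.Q4_K_S.gguf.part2of2",
]
# Stream each part into the combined file, in order.
with open("WizardLM-2-8x22B.Q4_K_S.gguf", "wb") as out:
    for name in parts:
        with open(name, "rb") as src:
            shutil.copyfileobj(src, out)
```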
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q2_K.gguf.part2of2) | Q2_K | 52.2 | |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.IQ3_XS.gguf.part2of2) | IQ3_XS | 58.3 | |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.IQ3_S.gguf.part2of2) | IQ3_S | 61.6 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q3_K_S.gguf.part2of2) | Q3_K_S | 61.6 | |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.IQ3_M.gguf.part2of2) | IQ3_M | 64.6 | |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q3_K_M.gguf.part2of2) | Q3_K_M | 67.9 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q3_K_L.gguf.part2of2) | Q3_K_L | 72.7 | |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.IQ4_XS.gguf.part2of2) | IQ4_XS | 76.5 | |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q4_K_S.gguf.part2of2) | Q4_K_S | 80.6 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q4_K_M.gguf.part2of2) | Q4_K_M | 85.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q5_K_S.gguf.part2of2) | Q5_K_S | 97.1 | |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q5_K_M.gguf.part3of3) | Q5_K_M | 100.1 | |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q6_K.gguf.part3of3) | Q6_K | 115.6 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q8_0.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q8_0.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q8_0.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/WizardLM-2-8x22B-GGUF/resolve/main/WizardLM-2-8x22B.Q8_0.gguf.part4of4) | Q8_0 | 149.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "microsoft/WizardLM-2-8x22B", "quantized_by": "mradermacher"}
|
mradermacher/WizardLM-2-8x22B-GGUF
| null |
[
"transformers",
"en",
"base_model:microsoft/WizardLM-2-8x22B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:30:03+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_4096_512_46M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5786
- F1 Score: 0.7502
- Accuracy: 0.7503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
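As with the sibling adapter above, a hedged PEFT sketch; this variant additionally folds the LoRA deltas into the base weights for standalone inference (head class and label count remain assumptions):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_4096_512_46M", num_labels=2  # assumptions
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_46M-L32_all"
)
merged = model.merge_and_unload()  # plain transformers model, no PEFT wrapper
```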
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6092 | 9.52 | 200 | 0.5435 | 0.7328 | 0.7328 |
| 0.5298 | 19.05 | 400 | 0.5265 | 0.7444 | 0.7445 |
| 0.4955 | 28.57 | 600 | 0.5266 | 0.7485 | 0.7486 |
| 0.4611 | 38.1 | 800 | 0.5344 | 0.7486 | 0.7494 |
| 0.4347 | 47.62 | 1000 | 0.5794 | 0.7413 | 0.7443 |
| 0.4151 | 57.14 | 1200 | 0.5621 | 0.7507 | 0.7509 |
| 0.3952 | 66.67 | 1400 | 0.5767 | 0.7517 | 0.7518 |
| 0.3804 | 76.19 | 1600 | 0.6054 | 0.7432 | 0.7443 |
| 0.362 | 85.71 | 1800 | 0.5979 | 0.7394 | 0.7409 |
| 0.3482 | 95.24 | 2000 | 0.6450 | 0.7411 | 0.7428 |
| 0.3332 | 104.76 | 2200 | 0.6271 | 0.7400 | 0.7409 |
| 0.3165 | 114.29 | 2400 | 0.6711 | 0.7377 | 0.7388 |
| 0.3019 | 123.81 | 2600 | 0.6708 | 0.7344 | 0.7351 |
| 0.2888 | 133.33 | 2800 | 0.7001 | 0.7348 | 0.7356 |
| 0.2775 | 142.86 | 3000 | 0.6883 | 0.7294 | 0.7304 |
| 0.2656 | 152.38 | 3200 | 0.7333 | 0.7365 | 0.7370 |
| 0.2542 | 161.9 | 3400 | 0.7556 | 0.7267 | 0.7281 |
| 0.2439 | 171.43 | 3600 | 0.7635 | 0.7245 | 0.7262 |
| 0.2346 | 180.95 | 3800 | 0.8163 | 0.7270 | 0.7288 |
| 0.2266 | 190.48 | 4000 | 0.7688 | 0.7273 | 0.7279 |
| 0.2178 | 200.0 | 4200 | 0.7957 | 0.7310 | 0.7315 |
| 0.209 | 209.52 | 4400 | 0.7971 | 0.7234 | 0.7245 |
| 0.2019 | 219.05 | 4600 | 0.8542 | 0.7214 | 0.7226 |
| 0.1967 | 228.57 | 4800 | 0.8340 | 0.7220 | 0.7232 |
| 0.189 | 238.1 | 5000 | 0.8609 | 0.7210 | 0.7224 |
| 0.182 | 247.62 | 5200 | 0.8595 | 0.7279 | 0.7288 |
| 0.1794 | 257.14 | 5400 | 0.8615 | 0.7239 | 0.7251 |
| 0.1727 | 266.67 | 5600 | 0.8990 | 0.7248 | 0.7262 |
| 0.1684 | 276.19 | 5800 | 0.9335 | 0.7224 | 0.7238 |
| 0.1643 | 285.71 | 6000 | 0.9007 | 0.7223 | 0.7236 |
| 0.1592 | 295.24 | 6200 | 0.9250 | 0.7224 | 0.7236 |
| 0.1555 | 304.76 | 6400 | 0.9373 | 0.7199 | 0.7215 |
| 0.1517 | 314.29 | 6600 | 0.9494 | 0.7214 | 0.7228 |
| 0.1491 | 323.81 | 6800 | 0.9250 | 0.7227 | 0.7238 |
| 0.1451 | 333.33 | 7000 | 0.9283 | 0.7259 | 0.7272 |
| 0.1428 | 342.86 | 7200 | 0.9701 | 0.7192 | 0.7211 |
| 0.1406 | 352.38 | 7400 | 0.9416 | 0.7247 | 0.7256 |
| 0.1367 | 361.9 | 7600 | 0.9528 | 0.7254 | 0.7264 |
| 0.1342 | 371.43 | 7800 | 0.9628 | 0.7183 | 0.7196 |
| 0.1326 | 380.95 | 8000 | 0.9838 | 0.7181 | 0.7194 |
| 0.1302 | 390.48 | 8200 | 0.9635 | 0.7221 | 0.7228 |
| 0.1292 | 400.0 | 8400 | 0.9910 | 0.7218 | 0.7232 |
| 0.1265 | 409.52 | 8600 | 0.9985 | 0.7241 | 0.7253 |
| 0.125 | 419.05 | 8800 | 0.9976 | 0.7196 | 0.7207 |
| 0.1244 | 428.57 | 9000 | 0.9970 | 0.7214 | 0.7224 |
| 0.1221 | 438.1 | 9200 | 0.9941 | 0.7238 | 0.7247 |
| 0.1224 | 447.62 | 9400 | 0.9951 | 0.7218 | 0.7228 |
| 0.1205 | 457.14 | 9600 | 1.0036 | 0.7209 | 0.7221 |
| 0.12 | 466.67 | 9800 | 1.0060 | 0.7196 | 0.7209 |
| 0.1207 | 476.19 | 10000 | 1.0023 | 0.7198 | 0.7209 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_4096_512_46M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_46M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null |
2024-04-15T19:36:29+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
|
GUE\_prom\_prom\_core\_notata-seqsight\_4096\_512\_46M-L32\_all
===============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5786
* F1 Score: 0.7502
* Accuracy: 0.7503
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# ECE-TW3-JRGL-VHF3
ECE-TW3-JRGL-VHF3 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [MTSAIR/MultiVerse_70B](https://huggingface.co/MTSAIR/MultiVerse_70B)
* [davidkim205/Rhea-72b-v0.5](https://huggingface.co/davidkim205/Rhea-72b-v0.5)
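This card also stops before the mergekit YAML; a hedged one-liner for trying the merged model via the text-generation pipeline (memory requirements for a 70B-class merge are substantial):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation", model="IAFrance/ECE-TW3-JRGL-VHF3", device_map="auto"
)
print(generator("Write a haiku about model merging:", max_new_tokens=40)[0]["generated_text"])
```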
## 🧩 Configuration
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "MTSAIR/MultiVerse_70B", "davidkim205/Rhea-72b-v0.5"]}
|
IAFrance/ECE-TW3-JRGL-VHF3
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"MTSAIR/MultiVerse_70B",
"davidkim205/Rhea-72b-v0.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T19:39:32+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #MTSAIR/MultiVerse_70B #davidkim205/Rhea-72b-v0.5 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ECE-TW3-JRGL-VHF3
ECE-TW3-JRGL-VHF3 is a merge of the following models using mergekit:
* MTSAIR/MultiVerse_70B
* davidkim205/Rhea-72b-v0.5
## Configuration
|
[
"# ECE-TW3-JRGL-VHF3\n\nECE-TW3-JRGL-VHF3 is a merge of the following models using mergekit:\n* MTSAIR/MultiVerse_70B\n* davidkim205/Rhea-72b-v0.5",
"## Configuration"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #MTSAIR/MultiVerse_70B #davidkim205/Rhea-72b-v0.5 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ECE-TW3-JRGL-VHF3\n\nECE-TW3-JRGL-VHF3 is a merge of the following models using mergekit:\n* MTSAIR/MultiVerse_70B\n* davidkim205/Rhea-72b-v0.5",
"## Configuration"
] |
text-to-image
|
diffusers
|
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - shc/us-election-style
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "a photo of US president election" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
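Since the snippet above is still a TODO, here is a hedged sketch using diffusers' standard LoRA loading; the dtype and device choices are assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("shc/us-election-style")

# Instance prompt taken from this card's metadata.
image = pipe("a photo of US president election").images[0]
image.save("election.png")
```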
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "diffusers", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true, "instance_prompt": "a photo of US president election"}
|
shc/us-election-style
| null |
[
"diffusers",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | null |
2024-04-15T19:39:54+00:00
|
[] |
[] |
TAGS
#diffusers #text-to-image #lora #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #region-us
|
# LoRA DreamBooth - shc/us-election-style
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "a photo of US president election" using DreamBooth. You can find some example images below.
!img_0
!img_1
!img_2
!img_3
LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
[
"# LoRA DreamBooth - shc/us-election-style\n\nThese are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of US president election using DreamBooth. You can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nLoRA for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
[
"TAGS\n#diffusers #text-to-image #lora #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #region-us \n",
"# LoRA DreamBooth - shc/us-election-style\n\nThese are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of US president election using DreamBooth. You can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nLoRA for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
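The card leaves this blank; a minimal, hedged sketch under the assumption that the repo is a standard StableLM causal-LM checkpoint (repo id taken from this record's metadata):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("heyllm234/sc26")
model = AutoModelForCausalLM.from_pretrained("heyllm234/sc26")

inputs = tok("Hello, my name is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```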
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
heyllm234/sc26
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:39:55+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
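The card leaves this blank; a hedged sketch assuming a standard BLIP conditional-generation checkpoint (repo id from this record's metadata; the input image is hypothetical):

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("thdangtr/blip_recipe1m_title_v2")
model = BlipForConditionalGeneration.from_pretrained("thdangtr/blip_recipe1m_title_v2")

image = Image.open("dish.jpg")  # hypothetical recipe photo
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```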
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
thdangtr/blip_recipe1m_title_v2
| null |
[
"transformers",
"safetensors",
"blip",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:40:23+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
image-to-text
|
transformers
|
A pre-trained ViT encoder and GPT-2 decoder fine-tuned on the Flickr8k dataset.
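A minimal usage sketch (not from the original card), assuming the repo exposes a standard vision-encoder-decoder checkpoint:

```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="arunmadhusudh/Vit-gpt2-flickr8k")
print(captioner("photo.jpg"))  # path or URL to an image; example input
```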
|
{"language": ["en"], "license": "apache-2.0", "pipeline_tag": "image-to-text"}
|
arunmadhusudh/Vit-gpt2-flickr8k
| null |
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-to-text",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:41:01+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #vision-encoder-decoder #image-to-text #en #license-apache-2.0 #endpoints_compatible #region-us
|
A pre-trained ViT encoder and GPT-2 decoder, fine-tuned on the Flickr8k dataset.
|
[] |
[
"TAGS\n#transformers #pytorch #vision-encoder-decoder #image-to-text #en #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it.
# The zip filename is an assumption based on the usual SB3 naming convention.
checkpoint = load_from_hub("MLIsaac/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "276.63 +/- 19.37", "name": "mean_reward", "verified": false}]}]}]}
|
MLIsaac/ppo-LunarLander-v2
| null |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-15T19:41:23+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
|
[
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
[
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-to-image
|
diffusers
|
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - samahadhoud/the_word_octopus_in_arabic__LoRA
<Gallery />
## Model description
These are samahadhoud/the_word_octopus_in_arabic__LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use the word octopus in arabic to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](samahadhoud/the_word_octopus_in_arabic__LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
from diffusers import AutoPipelineForText2Image  # minimal sketch; settings are assumptions
pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0").to("cuda")
pipe.load_lora_weights("samahadhoud/the_word_octopus_in_arabic__LoRA")
image = pipe("the word octopus in arabic").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "the word octopus in arabic", "widget": []}
|
samahadhoud/the_word_octopus_in_arabic__LoRA
| null |
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null |
2024-04-15T19:41:33+00:00
|
[] |
[] |
TAGS
#diffusers #tensorboard #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
|
# SDXL LoRA DreamBooth - samahadhoud/the_word_octopus_in_arabic__LoRA
<Gallery />
## Model description
These are samahadhoud/the_word_octopus_in_arabic__LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using DreamBooth.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use the word octopus in arabic to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
[
"# SDXL LoRA DreamBooth - samahadhoud/the_word_octopus_in_arabic__LoRA\n\n<Gallery />",
"## Model description\n\nThese are samahadhoud/the_word_octopus_in_arabic__LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Trigger words\n\nYou should use the word octopus in arabic to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
[
"TAGS\n#diffusers #tensorboard #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n",
"# SDXL LoRA DreamBooth - samahadhoud/the_word_octopus_in_arabic__LoRA\n\n<Gallery />",
"## Model description\n\nThese are samahadhoud/the_word_octopus_in_arabic__LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Trigger words\n\nYou should use the word octopus in arabic to trigger the image generation.",
"## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_4096_512_46M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5489
- F1 Score: 0.7731
- Accuracy: 0.7732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
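For reference, these settings map onto the standard `transformers.TrainingArguments` roughly as follows (a sketch only; the output directory and all model/data wiring are placeholders, and the Adam settings above are the Trainer defaults):

```python
from transformers import TrainingArguments

# Mirrors only the hyperparameters listed above; Adam betas/epsilon are Trainer defaults.
args = TrainingArguments(
    output_dir="out",                  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```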
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4824 | 66.67 | 200 | 0.6560 | 0.7683 | 0.7684 |
| 0.2171 | 133.33 | 400 | 0.8833 | 0.7700 | 0.7700 |
| 0.1317 | 200.0 | 600 | 0.9867 | 0.7581 | 0.7586 |
| 0.087 | 266.67 | 800 | 1.2142 | 0.7520 | 0.7520 |
| 0.0615 | 333.33 | 1000 | 1.3831 | 0.7537 | 0.7537 |
| 0.0474 | 400.0 | 1200 | 1.2747 | 0.7584 | 0.7586 |
| 0.0386 | 466.67 | 1400 | 1.4030 | 0.7618 | 0.7618 |
| 0.0322 | 533.33 | 1600 | 1.5060 | 0.7577 | 0.7586 |
| 0.0279 | 600.0 | 1800 | 1.4513 | 0.7569 | 0.7569 |
| 0.0244 | 666.67 | 2000 | 1.5587 | 0.7629 | 0.7635 |
| 0.0227 | 733.33 | 2200 | 1.6524 | 0.7563 | 0.7569 |
| 0.0198 | 800.0 | 2400 | 1.4060 | 0.7716 | 0.7716 |
| 0.0184 | 866.67 | 2600 | 1.5090 | 0.7683 | 0.7684 |
| 0.017 | 933.33 | 2800 | 1.5537 | 0.7651 | 0.7651 |
| 0.0157 | 1000.0 | 3000 | 1.3996 | 0.7683 | 0.7684 |
| 0.0147 | 1066.67 | 3200 | 1.6654 | 0.7543 | 0.7553 |
| 0.0138 | 1133.33 | 3400 | 1.4623 | 0.7634 | 0.7635 |
| 0.0133 | 1200.0 | 3600 | 1.5517 | 0.7732 | 0.7732 |
| 0.0122 | 1266.67 | 3800 | 1.6368 | 0.7684 | 0.7684 |
| 0.0114 | 1333.33 | 4000 | 1.5825 | 0.7665 | 0.7667 |
| 0.0115 | 1400.0 | 4200 | 1.6032 | 0.7667 | 0.7667 |
| 0.0111 | 1466.67 | 4400 | 1.6989 | 0.7731 | 0.7732 |
| 0.0104 | 1533.33 | 4600 | 1.6114 | 0.7846 | 0.7847 |
| 0.0099 | 1600.0 | 4800 | 1.6247 | 0.7781 | 0.7781 |
| 0.0094 | 1666.67 | 5000 | 1.5045 | 0.7683 | 0.7684 |
| 0.0089 | 1733.33 | 5200 | 1.5489 | 0.7842 | 0.7847 |
| 0.0089 | 1800.0 | 5400 | 1.5515 | 0.7863 | 0.7863 |
| 0.0086 | 1866.67 | 5600 | 1.8477 | 0.7645 | 0.7651 |
| 0.0085 | 1933.33 | 5800 | 1.6564 | 0.7814 | 0.7814 |
| 0.0082 | 2000.0 | 6000 | 1.6358 | 0.7715 | 0.7716 |
| 0.0082 | 2066.67 | 6200 | 1.6093 | 0.7846 | 0.7847 |
| 0.0075 | 2133.33 | 6400 | 1.5894 | 0.7928 | 0.7928 |
| 0.0073 | 2200.0 | 6600 | 1.7057 | 0.7847 | 0.7847 |
| 0.0073 | 2266.67 | 6800 | 1.6344 | 0.7798 | 0.7798 |
| 0.007 | 2333.33 | 7000 | 1.6579 | 0.7843 | 0.7847 |
| 0.0067 | 2400.0 | 7200 | 1.7407 | 0.7861 | 0.7863 |
| 0.0068 | 2466.67 | 7400 | 1.6310 | 0.7780 | 0.7781 |
| 0.0069 | 2533.33 | 7600 | 1.6412 | 0.7847 | 0.7847 |
| 0.0066 | 2600.0 | 7800 | 1.6736 | 0.7732 | 0.7732 |
| 0.0063 | 2666.67 | 8000 | 1.7840 | 0.7830 | 0.7830 |
| 0.006 | 2733.33 | 8200 | 1.7175 | 0.7781 | 0.7781 |
| 0.0063 | 2800.0 | 8400 | 1.6354 | 0.7845 | 0.7847 |
| 0.0061 | 2866.67 | 8600 | 1.5902 | 0.7879 | 0.7879 |
| 0.0061 | 2933.33 | 8800 | 1.6325 | 0.7863 | 0.7863 |
| 0.0056 | 3000.0 | 9000 | 1.7472 | 0.7863 | 0.7863 |
| 0.0054 | 3066.67 | 9200 | 1.7139 | 0.7797 | 0.7798 |
| 0.0059 | 3133.33 | 9400 | 1.7371 | 0.7830 | 0.7830 |
| 0.0054 | 3200.0 | 9600 | 1.7256 | 0.7781 | 0.7781 |
| 0.0055 | 3266.67 | 9800 | 1.7328 | 0.7830 | 0.7830 |
| 0.0053 | 3333.33 | 10000 | 1.7291 | 0.7798 | 0.7798 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_4096_512_46M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_4096_512_46M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null |
2024-04-15T19:41:35+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
|
GUE\_prom\_prom\_core\_tata-seqsight\_4096\_512\_46M-L32\_all
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5489
* F1 Score: 0.7731
* Accuracy: 0.7732
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null |
keras
|
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (reconstructed as a Keras optimizer in the sketch after the table):
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
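The Adam configuration in the table can be reconstructed as follows (a sketch; the card does not describe the model itself, so only the optimizer is shown):

```python
import tensorflow as tf

# Rebuilds the optimizer from the table above; training_precision was float32.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.0010000000474974513,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```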
|
{"library_name": "keras"}
|
anrhi/mobile_v2__fake_image_Xception_detection
| null |
[
"keras",
"region:us"
] | null |
2024-04-15T19:46:24+00:00
|
[] |
[] |
TAGS
#keras #region-us
|
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:"
] |
[
"TAGS\n#keras #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.001-len_3-filtered-negative
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
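A minimal loading sketch (the adapter targets `ai-forever/ruBert-base` per this card; the task head and concrete usage are assumptions):

```python
# Hedged sketch: attach the PEFT adapter to its base model.
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

base = AutoModel.from_pretrained("ai-forever/ruBert-base")
model = PeftModel.from_pretrained(base, "Shalazary/ruBert-base-sberquad-0.001-len_3-filtered-negative")
tokenizer = AutoTokenizer.from_pretrained("ai-forever/ruBert-base")
```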
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.001-len_3-filtered-negative", "results": []}]}
|
Shalazary/ruBert-base-sberquad-0.001-len_3-filtered-negative
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T19:48:55+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.001-len_3-filtered-negative
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# ruBert-base-sberquad-0.001-len_3-filtered-negative\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.001-len_3-filtered-negative\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
sentence-similarity
|
sentence-transformers
|
# atasoglu/distilbert-base-turkish-cased-nli-stsb-tr
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model was adapted from [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) and fine-tuned on these datasets:
- [nli_tr](https://huggingface.co/datasets/nli_tr)
- [emrecan/stsb-mt-turkish](https://huggingface.co/datasets/emrecan/stsb-mt-turkish)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('atasoglu/distilbert-base-turkish-cased-nli-stsb-tr')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('atasoglu/distilbert-base-turkish-cased-nli-stsb-tr')
model = AutoModel.from_pretrained('atasoglu/distilbert-base-turkish-cased-nli-stsb-tr')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
Achieved results on the [STS-b](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) test split are given below:
```txt
Cosine-Similarity : Pearson: 0.8167 Spearman: 0.8158
Manhattan-Distance: Pearson: 0.7540 Spearman: 0.7463
Euclidean-Distance: Pearson: 0.7545 Spearman: 0.7470
Dot-Product-Similarity: Pearson: 0.6543 Spearman: 0.6571
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 18,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 45,
"weight_decay": 0.01
}
```
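These parameters correspond to a `fit()` call along the following lines (a sketch; `train_dataloader` and `train_loss` stand for the DataLoader and CosineSimilarityLoss described above, and the evaluator is omitted):

```python
# Sketch of the training call implied by the parameters above.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,
    evaluation_steps=18,
    warmup_steps=45,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-05},
    scheduler="WarmupLinear",
    max_grad_norm=1,
)
```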
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"language": ["tr"], "license": "mit", "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["nli_tr", "emrecan/stsb-mt-turkish"], "pipeline_tag": "sentence-similarity", "base_model": "dbmdz/distilbert-base-turkish-cased"}
|
atasoglu/distilbert-base-turkish-cased-nli-stsb-tr
| null |
[
"sentence-transformers",
"safetensors",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"tr",
"dataset:nli_tr",
"dataset:emrecan/stsb-mt-turkish",
"base_model:dbmdz/distilbert-base-turkish-cased",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:49:44+00:00
|
[] |
[
"tr"
] |
TAGS
#sentence-transformers #safetensors #distilbert #feature-extraction #sentence-similarity #transformers #tr #dataset-nli_tr #dataset-emrecan/stsb-mt-turkish #base_model-dbmdz/distilbert-base-turkish-cased #license-mit #endpoints_compatible #region-us
|
# atasoglu/distilbert-base-turkish-cased-nli-stsb-tr
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model was adapted from dbmdz/distilbert-base-turkish-cased and fine-tuned on these datasets:
- nli_tr
- emrecan/stsb-mt-turkish
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
## Evaluation Results
Achieved results on the STS-b test split are given below:
## Training
The model was trained with the parameters:
DataLoader:
'torch.utils.data.dataloader.DataLoader' of length 180 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# atasoglu/distilbert-base-turkish-cased-nli-stsb-tr\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.\n\nThis model was adapted from dbmdz/distilbert-base-turkish-cased and fine-tuned on these datasets:\n- nli_tr\n- emrecan/stsb-mt-turkish",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\nAchieved results on the STS-b test split are given below:",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 180 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #safetensors #distilbert #feature-extraction #sentence-similarity #transformers #tr #dataset-nli_tr #dataset-emrecan/stsb-mt-turkish #base_model-dbmdz/distilbert-base-turkish-cased #license-mit #endpoints_compatible #region-us \n",
"# atasoglu/distilbert-base-turkish-cased-nli-stsb-tr\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.\n\nThis model was adapted from dbmdz/distilbert-base-turkish-cased and fine-tuned on these datasets:\n- nli_tr\n- emrecan/stsb-mt-turkish",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\nAchieved results on the STS-b test split are given below:",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 180 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
null | null |
# Aryanne/WizardLM-2-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`microsoft/WizardLM-2-7B`](https://huggingface.co/microsoft/WizardLM-2-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/WizardLM-2-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Aryanne/WizardLM-2-7B-Q4_K_M-GGUF --model wizardlm-2-7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Aryanne/WizardLM-2-7B-Q4_K_M-GGUF --model wizardlm-2-7b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m wizardlm-2-7b.Q4_K_M.gguf -n 128
```
|
{"license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"]}
|
Aryanne/WizardLM-2-7B-Q4_K_M-GGUF
| null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null |
2024-04-15T19:51:43+00:00
|
[] |
[] |
TAGS
#gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# Aryanne/WizardLM-2-7B-Q4_K_M-GGUF
This model was converted to GGUF format from 'microsoft/WizardLM-2-7B' using llama.cpp via the ggml.ai GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
Invoke the llama.cpp server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the Llama.cpp repo as well.
|
[
"# Aryanne/WizardLM-2-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'microsoft/WizardLM-2-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# Aryanne/WizardLM-2-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'microsoft/WizardLM-2-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"tags": ["trl", "sft", "generated_from_trainer"], "model-index": [{"name": "results", "results": []}]}
|
AbinSingh/results
| null |
[
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"region:us"
] | null |
2024-04-15T19:53:10+00:00
|
[] |
[] |
TAGS
#safetensors #trl #sft #generated_from_trainer #region-us
|
# results
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# results\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#safetensors #trl #sft #generated_from_trainer #region-us \n",
"# results\n\nThis model was trained from scratch on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
## Llamacpp Quantizations of wavecoder-ultra-6.7b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2675">b2675</a> for quantization.
Original model: https://huggingface.co/microsoft/wavecoder-ultra-6.7b
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
This seems to follow the DeepSeek coder format, aka Alpaca.
```
{system_prompt}
### Instruction: {prompt}
### Response:
```
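To try the format end-to-end, here is a small sketch using the llama-cpp-python bindings (the bindings themselves are not mentioned in this card, and the file path and sampling settings are placeholders):

```python
# Hedged sketch: run one Alpaca-style prompt against a downloaded quant.
from llama_cpp import Llama

llm = Llama(model_path="wavecoder-ultra-6.7b-Q4_K_M.gguf", n_ctx=4096)
prompt = (
    "You are a helpful coding assistant.\n"
    "### Instruction: Write a Python function that reverses a string.\n"
    "### Response:"
)
out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```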
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [wavecoder-ultra-6.7b-Q8_0.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-Q8_0.gguf) | Q8_0 | 7.16GB | Extremely high quality, generally unneeded but max available quant. |
| [wavecoder-ultra-6.7b-Q6_K.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-Q6_K.gguf) | Q6_K | 5.52GB | Very high quality, near perfect, *recommended*. |
| [wavecoder-ultra-6.7b-Q5_K_M.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-Q5_K_M.gguf) | Q5_K_M | 4.78GB | High quality, *recommended*. |
| [wavecoder-ultra-6.7b-Q5_K_S.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-Q5_K_S.gguf) | Q5_K_S | 4.65GB | High quality, *recommended*. |
| [wavecoder-ultra-6.7b-Q4_K_M.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-Q4_K_M.gguf) | Q4_K_M | 4.08GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [wavecoder-ultra-6.7b-Q4_K_S.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-Q4_K_S.gguf) | Q4_K_S | 3.85GB | Slightly lower quality with more space savings, *recommended*. |
| [wavecoder-ultra-6.7b-IQ4_NL.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-IQ4_NL.gguf) | IQ4_NL | 3.82GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [wavecoder-ultra-6.7b-IQ4_XS.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-IQ4_XS.gguf) | IQ4_XS | 3.62GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [wavecoder-ultra-6.7b-Q3_K_L.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-Q3_K_L.gguf) | Q3_K_L | 3.59GB | Lower quality but usable, good for low RAM availability. |
| [wavecoder-ultra-6.7b-Q3_K_M.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-Q3_K_M.gguf) | Q3_K_M | 3.29GB | Even lower quality. |
| [wavecoder-ultra-6.7b-IQ3_M.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-IQ3_M.gguf) | IQ3_M | 3.11GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [wavecoder-ultra-6.7b-IQ3_S.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-IQ3_S.gguf) | IQ3_S | 2.94GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [wavecoder-ultra-6.7b-Q3_K_S.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-Q3_K_S.gguf) | Q3_K_S | 2.94GB | Low quality, not recommended. |
| [wavecoder-ultra-6.7b-IQ3_XS.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-IQ3_XS.gguf) | IQ3_XS | 2.79GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [wavecoder-ultra-6.7b-IQ3_XXS.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-IQ3_XXS.gguf) | IQ3_XXS | 2.58GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [wavecoder-ultra-6.7b-Q2_K.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-Q2_K.gguf) | Q2_K | 2.53GB | Very low quality but surprisingly usable. |
| [wavecoder-ultra-6.7b-IQ2_M.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-IQ2_M.gguf) | IQ2_M | 2.36GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [wavecoder-ultra-6.7b-IQ2_S.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-IQ2_S.gguf) | IQ2_S | 2.19GB | Very low quality, uses SOTA techniques to be usable. |
| [wavecoder-ultra-6.7b-IQ2_XS.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-IQ2_XS.gguf) | IQ2_XS | 2.03GB | Very low quality, uses SOTA techniques to be usable. |
| [wavecoder-ultra-6.7b-IQ2_XXS.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-IQ2_XXS.gguf) | IQ2_XXS | 1.85GB | Lower quality, uses SOTA techniques to be usable. |
| [wavecoder-ultra-6.7b-IQ1_M.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-IQ1_M.gguf) | IQ1_M | 1.65GB | Extremely low quality, *not* recommended. |
| [wavecoder-ultra-6.7b-IQ1_S.gguf](https://huggingface.co/bartowski/wavecoder-ultra-6.7b-GGUF/blob/main/wavecoder-ultra-6.7b-IQ1_S.gguf) | IQ1_S | 1.52GB | Extremely low quality, *not* recommended. |
## Which file should I choose?
A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also covers AMD, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
{"license": "other", "library_name": "transformers", "tags": ["code"], "datasets": ["humaneval"], "metrics": ["code_eval"], "license_name": "deepseek", "pipeline_tag": "text-generation", "quantized_by": "bartowski"}
|
bartowski/wavecoder-ultra-6.7b-GGUF
| null |
[
"transformers",
"gguf",
"code",
"text-generation",
"dataset:humaneval",
"license:other",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:54:23+00:00
|
[] |
[] |
TAGS
#transformers #gguf #code #text-generation #dataset-humaneval #license-other #endpoints_compatible #region-us
|
Llamacpp Quantizations of wavecoder-ultra-6.7b
----------------------------------------------
Using llama.cpp release b2675 for quantization.
Original model: URL
All quants made using imatrix option with dataset provided by Kalomaze here
Prompt format
-------------
This seems to follow the DeepSeek coder format, aka Alpaca.
Download a file (not the whole branch) from below:
--------------------------------------------------
Which file should I choose?
---------------------------
A great write-up with charts showing various performances is provided by Artefact2 here
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX\_K\_X', like Q5\_K\_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
llama.cpp feature matrix
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX\_X, like IQ3\_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also covers AMD, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: URL
|
[] |
[
"TAGS\n#transformers #gguf #code #text-generation #dataset-humaneval #license-other #endpoints_compatible #region-us \n"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Action_model
This model is a fine-tuned version of [Raihan004/Action_model](https://huggingface.co/Raihan004/Action_model) on the action_class dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6130
- Accuracy: 0.8330
## Model description
More information needed
## Intended uses & limitations
More information needed
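A minimal inference sketch (standard transformers image-classification pipeline; the label set comes from the checkpoint itself):

```python
# Hedged sketch: classify an action image with the fine-tuned ViT.
from transformers import pipeline

classifier = pipeline("image-classification", model="Raihan004/Action_model")
print(classifier("example.jpg"))  # path is a placeholder
```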
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.255 | 0.37 | 100 | 0.7616 | 0.7926 |
| 0.2048 | 0.75 | 200 | 0.7247 | 0.8084 |
| 0.3763 | 1.12 | 300 | 0.6130 | 0.8330 |
| 0.307 | 1.49 | 400 | 0.8137 | 0.7891 |
| 0.3542 | 1.87 | 500 | 0.6612 | 0.8014 |
| 0.3518 | 2.24 | 600 | 0.6965 | 0.8190 |
| 0.3706 | 2.61 | 700 | 0.7254 | 0.8049 |
| 0.4084 | 2.99 | 800 | 0.6746 | 0.8102 |
| 0.2533 | 3.36 | 900 | 0.6867 | 0.8190 |
| 0.3147 | 3.73 | 1000 | 0.7077 | 0.8190 |
| 0.3182 | 4.1 | 1100 | 0.6661 | 0.8190 |
| 0.2248 | 4.48 | 1200 | 0.6632 | 0.8418 |
| 0.1617 | 4.85 | 1300 | 0.7277 | 0.8172 |
| 0.2578 | 5.22 | 1400 | 0.7114 | 0.8190 |
| 0.1864 | 5.6 | 1500 | 0.7554 | 0.8172 |
| 0.3134 | 5.97 | 1600 | 0.7593 | 0.8155 |
| 0.24 | 6.34 | 1700 | 0.7511 | 0.8260 |
| 0.2359 | 6.72 | 1800 | 0.7502 | 0.8137 |
| 0.2322 | 7.09 | 1900 | 0.6953 | 0.8348 |
| 0.1514 | 7.46 | 2000 | 0.7121 | 0.8260 |
| 0.2089 | 7.84 | 2100 | 0.6931 | 0.8278 |
| 0.2245 | 8.21 | 2200 | 0.7087 | 0.8330 |
| 0.1328 | 8.58 | 2300 | 0.7003 | 0.8313 |
| 0.1304 | 8.96 | 2400 | 0.7306 | 0.8225 |
| 0.1514 | 9.33 | 2500 | 0.7162 | 0.8260 |
| 0.2571 | 9.7 | 2600 | 0.7013 | 0.8348 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "Raihan004/Action_model", "model-index": [{"name": "Action_model", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "action_class", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8330404217926186, "name": "Accuracy"}]}]}]}
|
Raihan004/Action_model
| null |
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:Raihan004/Action_model",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T19:56:07+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-Raihan004/Action_model #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
Action\_model
=============
This model is a fine-tuned version of Raihan004/Action\_model on the action\_class dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6130
* Accuracy: 0.8330
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-imagefolder #base_model-Raihan004/Action_model #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
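A minimal sketch (repo id from this card; the prompt format and generation settings are assumptions):

```python
# Hedged sketch: basic text generation with the fine-tuned checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="AbinSingh/mistral_7b_guanaco_finetuned")
print(generator("What is a large language model?", max_new_tokens=64)[0]["generated_text"])
```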
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": ["trl", "sft"], "datasets": ["mlabonne/guanaco-llama2-1k"], "pipeline_tag": "text-generation"}
|
AbinSingh/mistral_7b_guanaco_finetuned
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"dataset:mlabonne/guanaco-llama2-1k",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:02:39+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #trl #sft #dataset-mlabonne/guanaco-llama2-1k #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #dataset-mlabonne/guanaco-llama2-1k #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-7b-hf-platypus-lamini-vxxiii-chat-enhanced
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on an unknown dataset.
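Since this repository hosts a PEFT adapter for [google/gemma-7b](https://huggingface.co/google/gemma-7b), a minimal loading sketch might look as follows (illustrative only; it assumes the adapter applies cleanly on top of the base model and that you have access to the gated base weights):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model first, then attach the fine-tuned adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
model = PeftModel.from_pretrained(base, "NassimB/gemma-7b-hf-platypus-lamini-vxxiii-chat-enhanced")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
```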
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
|
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "google/gemma-7b", "model-index": [{"name": "gemma-7b-hf-platypus-lamini-vxxiii-chat-enhanced", "results": []}]}
|
NassimB/gemma-7b-hf-platypus-lamini-vxxiii-chat-enhanced
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-7b",
"license:gemma",
"region:us"
] | null |
2024-04-15T20:03:17+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-7b #license-gemma #region-us
|
# gemma-7b-hf-platypus-lamini-vxxiii-chat-enhanced
This model is a fine-tuned version of google/gemma-7b on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
|
[
"# gemma-7b-hf-platypus-lamini-vxxiii-chat-enhanced\n\nThis model is a fine-tuned version of google/gemma-7b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.0\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1"
] |
[
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-7b #license-gemma #region-us \n",
"# gemma-7b-hf-platypus-lamini-vxxiii-chat-enhanced\n\nThis model is a fine-tuned version of google/gemma-7b on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.38.0\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1"
] |
null | null |
# Mistral based NIDS
This repository contains an implementation of a Network Intrusion Detection System (NIDS) based on the Mistral Large Language Model (LLM). The system is designed to detect and classify network attacks using natural language processing techniques.
## Overview
- **LLM**:
- The NIDS is built using the Mistral LLM, a powerful language model that enables the system to understand and analyze network traffic logs.
- Another LLM, Llama2, was fine-tuned and the performance of the two was compared. My implementation of the Llama2-based NIDS can be found [here](https://huggingface.co/caffeinatedcherrychic/Llama2-based-NIDS).
- **Dataset**: The system is trained and evaluated on the CIDDS dataset, which includes various types of network attacks such as DoS, PortScan, Brute Force, and PingScan.
- **Training**: The LLM is fine-tuned on the CIDDS dataset after it was pre-processed using the [NTFA tool](https://github.com/KayvanKarim/ntfa) to learn the patterns and characteristics of different network attacks.
- **Inference**: The trained model is used to classify network traffic logs in real-time, identifying potential attacks and generating alerts.
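The repository does not document its inference interface, so the sketch below is purely hypothetical: the prompt format, the label set, and the assumption that the checkpoint loads as a causal LM are all guesses meant only to illustrate the flow-classification idea.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch -- checkpoint layout, prompt format, and labels are assumptions.
model_id = "caffeinatedcherrychic/mistral-based-NIDS-old"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# An NTFA-style pre-processed flow record (features invented for illustration).
flow = "duration=0.48 proto=TCP dst_port=443 packets=12 bytes=2048"
prompt = (
    "Classify the following network flow as benign, dos, portScan, "
    f"bruteForce, or pingScan:\n{flow}\nLabel:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True).strip())
```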
## Results
The Mistral-based NIDS achieves a higher detection rate with fewer false positives than the Llama2-based variant, demonstrating the effectiveness of using LLMs for network intrusion detection. With longer access to computational resources, its performance could be improved further.
|
{}
|
caffeinatedcherrychic/mistral-based-NIDS-old
| null |
[
"tensorboard",
"safetensors",
"region:us"
] | null |
2024-04-15T20:05:26+00:00
|
[] |
[] |
TAGS
#tensorboard #safetensors #region-us
|
# Mistral based NIDS
This repository contains an implementation of a Network Intrusion Detection System (NIDS) based on the Mistral Large Language Model (LLM). The system is designed to detect and classify network attacks using natural language processing techniques.
## Overview
- LLM:
- The NIDS is built using the Mistral LLM, a powerful language model that enables the system to understand and analyze network traffic logs.
- Another LLM, Llama2, was fine-tuned and the performance of the two was compared. My implementation of the Llama2-based NIDS can be found here.
- Dataset: The system is trained and evaluated on the CIDDS dataset, which includes various types of network attacks such as DoS, PortScan, Brute Force, and PingScan.
- Training: The LLM is fine-tuned on the CIDDS dataset after it was pre-processed using the NTFA tool to learn the patterns and characteristics of different network attacks.
- Inference: The trained model is used to classify network traffic logs in real-time, identifying potential attacks and generating alerts.
## Results
The Mistral-based NIDS achieves a higher detection rate with fewer false positives than the Llama2-based variant, demonstrating the effectiveness of using LLMs for network intrusion detection. With longer access to computational resources, its performance could be improved further.
|
[
"# Mistral based NIDS\n\nThis repository contains an implementation of a Network Intrusion Detection System (NIDS) based on the Mistral Large Language Model (LLM). The system is designed to detect and classify network attacks using natural language processing techniques.",
"## Overview\n- LLM: \n - The NIDS is built using the Mistral LLM, a powerful language model that enables the system to understand and analyze network traffic logs.\n - Another LLM, Llama2, was fine-tuned and the performance of the two were compared. The link to my implementation of Llama2-based can be found here.\n- Dataset: The system is trained and evaluated on the CIDDS dataset, which includes various types of network attacks such as DoS, PortScan, Brute Force, and PingScan.\n- Training: The LLM is fine-tuned on the CIDDS dataset after it was pre-processed using the NTFA tool to learn the patterns and characteristics of different network attacks.\n- Inference: The trained model is used to classify network traffic logs in real-time, identifying potential attacks and generating alerts.",
"## Results\nThe mistral-based NIDS achieves a higher detection rate with lower false positives, demonstrating the effectiveness of using LLMs for network intrusion detection. With access to computational resources for longer periods, It's performance could further be improved."
] |
[
"TAGS\n#tensorboard #safetensors #region-us \n",
"# Mistral based NIDS\n\nThis repository contains an implementation of a Network Intrusion Detection System (NIDS) based on the Mistral Large Language Model (LLM). The system is designed to detect and classify network attacks using natural language processing techniques.",
"## Overview\n- LLM: \n - The NIDS is built using the Mistral LLM, a powerful language model that enables the system to understand and analyze network traffic logs.\n - Another LLM, Llama2, was fine-tuned and the performance of the two were compared. The link to my implementation of Llama2-based can be found here.\n- Dataset: The system is trained and evaluated on the CIDDS dataset, which includes various types of network attacks such as DoS, PortScan, Brute Force, and PingScan.\n- Training: The LLM is fine-tuned on the CIDDS dataset after it was pre-processed using the NTFA tool to learn the patterns and characteristics of different network attacks.\n- Inference: The trained model is used to classify network traffic logs in real-time, identifying potential attacks and generating alerts.",
"## Results\nThe mistral-based NIDS achieves a higher detection rate with lower false positives, demonstrating the effectiveness of using LLMs for network intrusion detection. With access to computational resources for longer periods, It's performance could further be improved."
] |
text-generation
|
transformers
|
# SambaLingo-Arabic-Chat-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Arabic-Chat-70B is a human-aligned chat model trained in Arabic and English. It is trained using direct preference optimization on top of the base model [SambaLingo-Arabic-Base-70B](https://huggingface.co/sambanovasystems/SambaLingo-Arabic-Base-70B). The base model adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Arabic by training on 28 billion tokens from the Arabic split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Arabic, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Arabic-Chat-70B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Arabic-Chat-70B", device_map="auto", torch_dtype="auto")
```
### Interacting With Model Pipeline
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Arabic-Chat-70B", device_map="auto", use_fast=False)
messages = [
{"role": "user", "content": {YOUR_QUESTION}},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]
outputs = outputs["generated_text"]
```
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
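With the pipeline above, these settings can be passed directly to the generation call. A minimal sketch (`do_sample=True` and the `max_new_tokens` value are assumptions, since the card does not specify them):
```python
outputs = pipe(
    prompt,
    do_sample=True,           # sampling must be enabled for temperature/top_p to apply
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.0,
    max_new_tokens=512,       # assumed limit; adjust to your use case
)
```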
### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}</s>\n<|assistant|>\n
```
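When `apply_chat_template` is unavailable (for example, outside Hugging Face tooling), the same prompt can be assembled by hand. A small sketch with a placeholder question:
```python
question = "YOUR_QUESTION"  # placeholder; substitute your actual question
prompt = f"<|user|>\n{question}</s>\n<|assistant|>\n"
```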
### Example Prompts and Generations
```
<|user|>
كيف تصف الالوان لشخص ولد أعمى</s>
<|assistant|>
ليس لدي القدرة على الرؤية، ولكن يمكنني تقديم وصف للألوان بناءً على ما تعلمته من البيانات.\n\nبالنسبة لشخص ولد أعمى، قد يكون من الصعب وصف الألوان لأنها لم تختبرها من قبل. ومع ذلك، يمكننا استخدام الاستعارات والمقارنات لمساعدة هذا الشخص على فهم الألوان. على سبيل المثال، يمكننا وصف اللون الأحمر بأنه دافئ وحيوي، مثل لهب النار أو ثمرة الفراولة الناضجة. يمكننا وصف اللون الأزرق بأنه هادئ ومهدئ، مثل السماء أو المحيط. يمكننا وصف اللون الأخضر بأنه منعش وطبيعي، مثل أوراق الشجر أو العشب. يمكننا وصف اللون الأصفر بأنه مشرق ومبهج، مثل الشمس أو زهرة عباد الشمس. يمكننا وصف اللون الأرجواني بأنه ملكي وفاخر، مثل لون الجواهر أو غروب الشمس. في النهاية، قد يكون من الصعب وصف الألوان لشخص ولد أعمى، ولكن باستخدام الاستعارات والمقارنات، يمكننا مساعدتهم على فهم الفروق الدقيقة في الألوان بشكل أفضل.
```
## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), and comprises two stages: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).
The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.
The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset and [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.
## Tokenizer Details
We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
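The extension is easy to sanity-check from the tokenizer itself (a small sketch; the exact count may differ slightly from the nominal figures):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Arabic-Chat-70B", use_fast=False)
print(len(tok))          # expected to be around 57,000 after the extension
print(len(tok) - 32000)  # approximate number of tokens added on top of base Llama 2
```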
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta’s [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction-tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
title={SambaLingo: Teaching Large Language Models New Languages},
author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
year={2024},
eprint={2404.05829},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["ar", "en"], "license": "llama2", "datasets": ["HuggingFaceH4/ultrachat_200k", "HuggingFaceH4/ultrafeedback_binarized", "HuggingFaceH4/cai-conversation-harmless"]}
|
sambanovasystems/SambaLingo-Arabic-Chat-70B
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"ar",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"dataset:HuggingFaceH4/cai-conversation-harmless",
"arxiv:2404.05829",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:06:06+00:00
|
[
"2404.05829"
] |
[
"ar",
"en"
] |
TAGS
#transformers #pytorch #llama #text-generation #conversational #ar #en #dataset-HuggingFaceH4/ultrachat_200k #dataset-HuggingFaceH4/ultrafeedback_binarized #dataset-HuggingFaceH4/cai-conversation-harmless #arxiv-2404.05829 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# SambaLingo-Arabic-Chat-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
SambaLingo-Arabic-Chat-70B is a human-aligned chat model trained in Arabic and English. It is trained using direct preference optimization on top of the base model SambaLingo-Arabic-Base-70B. The base model adapts Llama-2-70b to Arabic by training on 28 billion tokens from the Arabic split of the Cultura-X dataset. Try this model at SambaLingo-chat-space.
## Model Description
- Developed by: SambaNova Systems
- Model type: Language Model
- Language(s): Arabic, English
- Finetuned from model: Llama-2-70b
- Paper: SambaLingo: Teaching Large Language Models New Languages
- Blog Post: sambalingo-open-source-language-experts
## Getting Started
### Loading Model With Hugging Face
Please make sure to set use_fast=False when loading the tokenizer.
### Interacting With Model Pipeline
Please make sure to set use_fast=False when loading the tokenizer.
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
### Prompting Guidelines
To prompt this model, please use the following chat template:
### Example Prompts and Generations
## Training Details
The alignment phase follows the recipe for Zephyr-7B, and comprises two stages: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).
The SFT phase was done on the ultrachat_200k dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.
The DPO phase was done on the ultrafeedback dataset and cai-conversation-harmless dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.
## Tokenizer Details
We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
## Evaluation
For evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages
## Uses
### Direct Use
Use of this model is governed by Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction-tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
|
[
"# SambaLingo-Arabic-Chat-70B\n\n<img src=\"SambaLingo_Logo.png\" width=\"340\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nSambaLingo-Arabic-Chat-70B is a human aligned chat model trained in Arabic and English. It is trained using direct preference optimization on top the base model SambaLingo-Arabic-Base-70B. The base model adapts Llama-2-70b to Arabic by training on 28 billion tokens from the Arabic split of the Cultura-X dataset. Try This Model at SambaLingo-chat-space.",
"## Model Description\n\n\n- Developed by: SambaNova Systems\n- Model type: Language Model\n- Language(s): Arabic, English\n- Finetuned from model: Llama-2-70b\n- Paper: SambaLingo: Teaching Large Language Models New Languages\n- Blog Post: sambalingo-open-source-language-experts",
"## Getting Started",
"### Loading Model With Hugging Face\nPlease make sure to set use_fast=False when loading the tokenizer.",
"### Interacting With Model Pipeline\nPlease make sure to set use_fast=False when loading the tokenizer.",
"### Suggested Inference Parameters\n- Temperature: 0.8\n- Repetition penalty: 1.0\n- Top-p: 0.9",
"### Prompting Guidelines\nTo prompt this model, please use the following chat template:",
"### Example Prompts and Generations",
"## Training Details\nThe alignment phase follows the recipe for Zephyr-7B, and comprises two stages: supervised fine-tuning (SFT) and Direct Performance Optimization (DPO).\n\nThe SFT phase was done on the ultrachat_200k dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.\n\nThe DPO phase was done on the ultrafeedback dataset and cai-conversation-harmless dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.",
"## Tokenizer Details\nWe extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.",
"## Evaluation\nFor evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages",
"## Uses",
"### Direct Use\n\n\nUse of this model is governed by the Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.",
"### Out-of-Scope Use\n\n\nSambaLingo should NOT be used for:\n\n- Mission-critical applications\n- Applications that involve the safety of others\n- Making highly important decisions",
"## Bias, Risks, and Limitations\n\n\n\nLike all LLMs, SambaLingo has certain limitations:\n- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.\n- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.\n- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.\n- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.\n- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.",
"## Acknowledgments\nWe extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.\n\nWe would like to give a special thanks to the following groups:\n- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset\n- Nguyen et al for open sourcing CulturaX dataset\n- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset\n- EleutherAI for their open source evaluation framework\n- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo",
"## Cite SambaLingo"
] |
[
"TAGS\n#transformers #pytorch #llama #text-generation #conversational #ar #en #dataset-HuggingFaceH4/ultrachat_200k #dataset-HuggingFaceH4/ultrafeedback_binarized #dataset-HuggingFaceH4/cai-conversation-harmless #arxiv-2404.05829 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# SambaLingo-Arabic-Chat-70B\n\n<img src=\"SambaLingo_Logo.png\" width=\"340\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nSambaLingo-Arabic-Chat-70B is a human aligned chat model trained in Arabic and English. It is trained using direct preference optimization on top the base model SambaLingo-Arabic-Base-70B. The base model adapts Llama-2-70b to Arabic by training on 28 billion tokens from the Arabic split of the Cultura-X dataset. Try This Model at SambaLingo-chat-space.",
"## Model Description\n\n\n- Developed by: SambaNova Systems\n- Model type: Language Model\n- Language(s): Arabic, English\n- Finetuned from model: Llama-2-70b\n- Paper: SambaLingo: Teaching Large Language Models New Languages\n- Blog Post: sambalingo-open-source-language-experts",
"## Getting Started",
"### Loading Model With Hugging Face\nPlease make sure to set use_fast=False when loading the tokenizer.",
"### Interacting With Model Pipeline\nPlease make sure to set use_fast=False when loading the tokenizer.",
"### Suggested Inference Parameters\n- Temperature: 0.8\n- Repetition penalty: 1.0\n- Top-p: 0.9",
"### Prompting Guidelines\nTo prompt this model, please use the following chat template:",
"### Example Prompts and Generations",
"## Training Details\nThe alignment phase follows the recipe for Zephyr-7B, and comprises two stages: supervised fine-tuning (SFT) and Direct Performance Optimization (DPO).\n\nThe SFT phase was done on the ultrachat_200k dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.\n\nThe DPO phase was done on the ultrafeedback dataset and cai-conversation-harmless dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.",
"## Tokenizer Details\nWe extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.",
"## Evaluation\nFor evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages",
"## Uses",
"### Direct Use\n\n\nUse of this model is governed by the Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.",
"### Out-of-Scope Use\n\n\nSambaLingo should NOT be used for:\n\n- Mission-critical applications\n- Applications that involve the safety of others\n- Making highly important decisions",
"## Bias, Risks, and Limitations\n\n\n\nLike all LLMs, SambaLingo has certain limitations:\n- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.\n- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.\n- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.\n- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.\n- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.",
"## Acknowledgments\nWe extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.\n\nWe would like to give a special thanks to the following groups:\n- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset\n- Nguyen et al for open sourcing CulturaX dataset\n- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset\n- EleutherAI for their open source evaluation framework\n- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo",
"## Cite SambaLingo"
] |
text-generation
|
transformers
|
# SambaLingo-Hungarian-Chat-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Hungarian-Chat-70B is a human-aligned chat model trained in Hungarian and English. It is trained using direct preference optimization on top of the base model [SambaLingo-Hungarian-Base](https://huggingface.co/sambanovasystems/SambaLingo-Hungarian-Base). The base model adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Hungarian by training on 19 billion tokens from the Hungarian split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Hungarian, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Hungarian-Chat-70B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Hungarian-Chat-70B", device_map="auto", torch_dtype="auto")
```
### Interacting With Model Pipeline
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Hungarian-Chat-70B", device_map="auto", use_fast=False)
messages = [
{"role": "user", "content": {YOUR_QUESTION}},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]
outputs = outputs["generated_text"]
```
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
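Equivalently, when calling `model.generate` directly with the `model` and `tokenizer` loaded above and a prompt formatted per the chat template in the next subsection, a sketch might look like this (`do_sample=True` and `max_new_tokens` are assumptions):
```python
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,        # sampling must be enabled for temperature/top_p to apply
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.0,
    max_new_tokens=512,    # assumed limit; adjust to your use case
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```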
### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}</s>\n<|assistant|>\n
```
### Example Prompts and Generations
```
<|user|>
Írj át minden nevet <anonim>-ra a következő bekezdésben:\nNagy Róbert felkelt, megitta reggeli kávéját, elköszönt feleségétől, Évától és beült autójába. Késésben volt, ezért gyorsabban vezzetett a megengedettnél, aminek következményeképp Pesti Károly ezredes megállította. Végül egy bírsággal megúszta, de a munkából már bőven elkésett.</s>
<|assistant|>
Anonim felkelt, megitta reggeli kávéját, elköszönt feleségétől, Anonimtól, és beült autójába. Késésben volt, ezért gyorsabban vezzett a megengedettnél, aminek következtében Anonim ezredes megállította. Végül egy bírsággal megúszta, de a munkából már bőven elkésett.
```
## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), and comprises two stages: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).
The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.
The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset and [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.
## Tokenizer Details
We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta’s [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction-tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
title={SambaLingo: Teaching Large Language Models New Languages},
author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
year={2024},
eprint={2404.05829},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["hu", "en"], "license": "llama2", "datasets": ["HuggingFaceH4/ultrachat_200k", "HuggingFaceH4/ultrafeedback_binarized", "HuggingFaceH4/cai-conversation-harmless"]}
|
sambanovasystems/SambaLingo-Hungarian-Chat-70B
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"hu",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"dataset:HuggingFaceH4/cai-conversation-harmless",
"arxiv:2404.05829",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:06:31+00:00
|
[
"2404.05829"
] |
[
"hu",
"en"
] |
TAGS
#transformers #pytorch #llama #text-generation #conversational #hu #en #dataset-HuggingFaceH4/ultrachat_200k #dataset-HuggingFaceH4/ultrafeedback_binarized #dataset-HuggingFaceH4/cai-conversation-harmless #arxiv-2404.05829 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# SambaLingo-Hungarian-Chat-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
SambaLingo-Hungarian-Chat-70B is a human-aligned chat model trained in Hungarian and English. It is trained using direct preference optimization on top of the base model SambaLingo-Hungarian-Base. The base model adapts Llama-2-70b to Hungarian by training on 19 billion tokens from the Hungarian split of the Cultura-X dataset. Try this model at SambaLingo-chat-space.
## Model Description
- Developed by: SambaNova Systems
- Model type: Language Model
- Language(s): Hungarian, English
- Finetuned from model: Llama-2-70b
- Paper: SambaLingo: Teaching Large Language Models New Languages
- Blog Post: sambalingo-open-source-language-experts
## Getting Started
### Loading Model With Hugging Face
Please make sure to set use_fast=False when loading the tokenizer.
### Interacting With Model Pipeline
Please make sure to set use_fast=False when loading the tokenizer.
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
### Prompting Guidelines
To prompt this model, please use the following chat template:
### Example Prompts and Generations
## Training Details
The alignment phase follows the recipe for Zephyr-7B, and comprises two stages: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).
The SFT phase was done on the ultrachat_200k dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.
The DPO phase was done on the ultrafeedback dataset and cai-conversation-harmless dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.
## Tokenizer Details
We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
## Evaluation
For evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages
## Uses
### Direct Use
Use of this model is governed by Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction-tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
|
[
"# SambaLingo-Hungarian-Chat-70B\n\n<img src=\"SambaLingo_Logo.png\" width=\"340\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nSambaLingo-Hungarian-Chat-70B is a human aligned chat model trained in Hungarian and English. It is trained using direct preference optimization on top the base model SambaLingo-Hungarian-Base. The base model adapts Llama-2-70b to Hungarian by training on 19 billion tokens from the Hungarian split of the Cultura-X dataset. Try This Model at SambaLingo-chat-space.",
"## Model Description\n\n\n- Developed by: SambaNova Systems\n- Model type: Language Model\n- Language(s): Hungarian, English\n- Finetuned from model: Llama-2-70b\n- Paper: SambaLingo: Teaching Large Language Models New Languages\n- Blog Post: sambalingo-open-source-language-experts",
"## Getting Started",
"### Loading Model With Hugging Face\nPlease make sure to set use_fast=False when loading the tokenizer.",
"### Interacting With Model Pipeline\nPlease make sure to set use_fast=False when loading the tokenizer.",
"### Suggested Inference Parameters\n- Temperature: 0.8\n- Repetition penalty: 1.0\n- Top-p: 0.9",
"### Prompting Guidelines\nTo prompt this model, please use the following chat template:",
"### Example Prompts and Generations",
"## Training Details\nThe alignment phase follows the recipe for Zephyr-7B, and comprises two stages: supervised fine-tuning (SFT) and Direct Performance Optimization (DPO).\n\nThe SFT phase was done on the ultrachat_200k dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.\n\nThe DPO phase was done on the ultrafeedback dataset and cai-conversation-harmless dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.",
"## Tokenizer Details\nWe extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.",
"## Evaluation\nFor evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages",
"## Uses",
"### Direct Use\n\n\nUse of this model is governed by the Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.",
"### Out-of-Scope Use\n\n\nSambaLingo should NOT be used for:\n\n- Mission-critical applications\n- Applications that involve the safety of others\n- Making highly important decisions",
"## Bias, Risks, and Limitations\n\n\n\nLike all LLMs, SambaLingo has certain limitations:\n- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.\n- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.\n- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.\n- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.\n- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.",
"## Acknowledgments\nWe extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.\n\nWe would like to give a special thanks to the following groups:\n- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset\n- Nguyen et al for open sourcing CulturaX dataset\n- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset\n- EleutherAI for their open source evaluation framework\n- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo",
"## Cite SambaLingo"
] |
[
"TAGS\n#transformers #pytorch #llama #text-generation #conversational #hu #en #dataset-HuggingFaceH4/ultrachat_200k #dataset-HuggingFaceH4/ultrafeedback_binarized #dataset-HuggingFaceH4/cai-conversation-harmless #arxiv-2404.05829 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# SambaLingo-Hungarian-Chat-70B\n\n<img src=\"SambaLingo_Logo.png\" width=\"340\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nSambaLingo-Hungarian-Chat-70B is a human aligned chat model trained in Hungarian and English. It is trained using direct preference optimization on top the base model SambaLingo-Hungarian-Base. The base model adapts Llama-2-70b to Hungarian by training on 19 billion tokens from the Hungarian split of the Cultura-X dataset. Try This Model at SambaLingo-chat-space.",
"## Model Description\n\n\n- Developed by: SambaNova Systems\n- Model type: Language Model\n- Language(s): Hungarian, English\n- Finetuned from model: Llama-2-70b\n- Paper: SambaLingo: Teaching Large Language Models New Languages\n- Blog Post: sambalingo-open-source-language-experts",
"## Getting Started",
"### Loading Model With Hugging Face\nPlease make sure to set use_fast=False when loading the tokenizer.",
"### Interacting With Model Pipeline\nPlease make sure to set use_fast=False when loading the tokenizer.",
"### Suggested Inference Parameters\n- Temperature: 0.8\n- Repetition penalty: 1.0\n- Top-p: 0.9",
"### Prompting Guidelines\nTo prompt this model, please use the following chat template:",
"### Example Prompts and Generations",
"## Training Details\nThe alignment phase follows the recipe for Zephyr-7B, and comprises two stages: supervised fine-tuning (SFT) and Direct Performance Optimization (DPO).\n\nThe SFT phase was done on the ultrachat_200k dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.\n\nThe DPO phase was done on the ultrafeedback dataset and cai-conversation-harmless dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.",
"## Tokenizer Details\nWe extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.",
"## Evaluation\nFor evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages",
"## Uses",
"### Direct Use\n\n\nUse of this model is governed by the Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.",
"### Out-of-Scope Use\n\n\nSambaLingo should NOT be used for:\n\n- Mission-critical applications\n- Applications that involve the safety of others\n- Making highly important decisions",
"## Bias, Risks, and Limitations\n\n\n\nLike all LLMs, SambaLingo has certain limitations:\n- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.\n- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.\n- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.\n- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.\n- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.",
"## Acknowledgments\nWe extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.\n\nWe would like to give a special thanks to the following groups:\n- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset\n- Nguyen et al for open sourcing CulturaX dataset\n- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset\n- EleutherAI for their open source evaluation framework\n- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo",
"## Cite SambaLingo"
] |
text-generation
|
transformers
|
# SambaLingo-Thai-Chat-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Thai-Chat-70B is a human-aligned chat model trained in Thai and English. It is trained using direct preference optimization on top of the base model [SambaLingo-Thai-Base-70B](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Base-70B). The base model adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Thai by training on 26 billion tokens from the Thai split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Thai, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", device_map="auto", torch_dtype="auto")
```
### Interacting With Model Pipeline
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Thai-Chat-70B", device_map="auto", use_fast=False)
# Replace YOUR_QUESTION below with your question as a plain string.
YOUR_QUESTION = "..."
messages = [
    {"role": "user", "content": YOUR_QUESTION},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]["generated_text"]
```
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
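These suggestions map directly onto standard generate arguments. A minimal sketch, reusing pipe and prompt from the pipeline example above (the max_new_tokens value is our assumption, not from the card):
```python
# Illustrative decoding call with the suggested parameters.
outputs = pipe(
    prompt,
    do_sample=True,           # sampling must be enabled for temperature/top-p to apply
    temperature=0.8,
    repetition_penalty=1.0,
    top_p=0.9,
    max_new_tokens=256,       # assumption: the card does not specify a length limit
)
print(outputs[0]["generated_text"])
```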
### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}</s>\n<|assistant|>\n
```
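As an illustrative sanity check, apply_chat_template from the pipeline example should reproduce exactly this template around your message:
```python
# Illustrative check that the tokenizer's chat template matches the documented one.
messages = [{"role": "user", "content": "{question}"}]
print(pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# Expected, per the template above:
# <|user|>
# {question}</s>
# <|assistant|>
```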
### Example Prompts and Generations
```
<|user|>
ประเทศไทยช่วงเช้าเคารพธงชาติเมื่อไร</s>
<|assistant|>
ในประเทศไทย เวลาเคารพธงชาติคือเวลา 08.00 น. และ 18.00 น. ทุกวัน ประชาชนจะยืนตรงและร้องเพลงชาติในช่วงเวลาเหล่านี้เพื่อเป็นสัญลักษณ์ของความรักชาติและความเคารพต่อประเทศ
```
*(English translation — User: "When does Thailand observe the morning salute to the national flag?" Assistant: "In Thailand, the flag salute times are 08.00 and 18.00 every day. People stand at attention and sing the national anthem at these times as a symbol of patriotism and respect for the country.")*
## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) and comprises two stages: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).
The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with its Google-translated version. It was trained for one epoch with a global batch size of 512 and a maximum sequence length of 2048 tokens. We used a linearly decaying learning rate of 2e-5 and 10% warmup.
The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) and [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) datasets, of which 10% was Google translated. It was trained for three epochs with a global batch size of 32. We used a linearly decaying learning rate of 5e-7, 10% warmup, and β=0.1 as the regularization factor for DPO.
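For reference (not spelled out in the card itself), β is the coefficient in the standard DPO objective of Rafailov et al. (2023), which controls how strongly the policy is kept close to the reference (SFT) model:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$

where $y_w$ and $y_l$ are the preferred and dispreferred completions for prompt $x$.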
## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
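A quick way to confirm the extended vocabulary locally (illustrative; the exact count may differ slightly once special tokens are included):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat-70B", use_fast=False)
print(len(tok))  # expected to be around 57,000 per the card, versus 32,000 for base Llama-2
```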
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
title={SambaLingo: Teaching Large Language Models New Languages},
author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
year={2024},
eprint={2404.05829},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["th", "en"], "license": "llama2", "datasets": ["HuggingFaceH4/ultrachat_200k", "HuggingFaceH4/ultrafeedback_binarized", "HuggingFaceH4/cai-conversation-harmless"]}
|
sambanovasystems/SambaLingo-Thai-Chat-70B
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"th",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"dataset:HuggingFaceH4/cai-conversation-harmless",
"arxiv:2404.05829",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:06:57+00:00
|
[
"2404.05829"
] |
[
"th",
"en"
] |
TAGS
#transformers #pytorch #llama #text-generation #conversational #th #en #dataset-HuggingFaceH4/ultrachat_200k #dataset-HuggingFaceH4/ultrafeedback_binarized #dataset-HuggingFaceH4/cai-conversation-harmless #arxiv-2404.05829 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# SambaLingo-Thai-Chat-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
SambaLingo-Thai-Chat-70B is a human-aligned chat model trained in Thai and English. It is trained using direct preference optimization on top of the base model SambaLingo-Thai-Base-70B. The base model adapts Llama-2-70b to Thai by training on 26 billion tokens from the Thai split of the Cultura-X dataset. Try this model at SambaLingo-chat-space.
## Model Description
- Developed by: SambaNova Systems
- Model type: Language Model
- Language(s): Thai, English
- Finetuned from model: Llama-2-70b
- Paper: SambaLingo: Teaching Large Language Models New Languages
- Blog Post: sambalingo-open-source-language-experts
## Getting Started
### Loading Model With Hugging Face
Please make sure to set use_fast=False when loading the tokenizer.
### Interacting With Model Pipeline
Please make sure to set use_fast=False when loading the tokenizer.
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
### Prompting Guidelines
To prompt this model, please use the following chat template:
### Example Prompts and Generations
## Training Details
The alignment phase follows the recipe for Zephyr-7B and comprises two stages: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).
The SFT phase was done on the ultrachat_200k dataset mixed with its Google-translated version. It was trained for one epoch with a global batch size of 512 and a maximum sequence length of 2048 tokens. We used a linearly decaying learning rate of 2e-5 and 10% warmup.
The DPO phase was done on the ultrafeedback and cai-conversation-harmless datasets, of which 10% was Google translated. It was trained for three epochs with a global batch size of 32. We used a linearly decaying learning rate of 5e-7, 10% warmup, and β=0.1 as the regularization factor for DPO.
## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
## Evaluation
For evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages
## Uses
### Direct Use
Use of this model is governed by Meta's Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
|
[
"# SambaLingo-Thai-Chat-70B\n\n<img src=\"SambaLingo_Logo.png\" width=\"340\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nSambaLingo-Thai-Chat-70B is a human aligned chat model trained in Thai and English. It is trained using direct preference optimization on top the base model SambaLingo-Thai-Base-70B. The base model adapts Llama-2-70b to Thai by training on 26 billion tokens from the Thai split of the Cultura-X dataset. Try This Model at SambaLingo-chat-space.",
"## Model Description\n\n\n- Developed by: SambaNova Systems\n- Model type: Language Model\n- Language(s): Thai, English\n- Finetuned from model: Llama-2-70b\n- Paper: SambaLingo: Teaching Large Language Models New Languages\n- Blog Post: sambalingo-open-source-language-experts",
"## Getting Started",
"### Loading Model With Hugging Face\nPlease make sure to set use_fast=False when loading the tokenizer.",
"### Interacting With Model Pipeline\nPlease make sure to set use_fast=False when loading the tokenizer.",
"### Suggested Inference Parameters\n- Temperature: 0.8\n- Repetition penalty: 1.0\n- Top-p: 0.9",
"### Prompting Guidelines\nTo prompt this model, please use the following chat template:",
"### Example Prompts and Generations",
"## Training Details\nThe alignment phase follows the recipe for Zephyr-7B, and comprises two stages: supervised fine-tuning (SFT) and Direct Performance Optimization (DPO).\n\nThe SFT phase was done on the ultrachat_200k dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.\n\nThe DPO phase was done on the ultrafeedback dataset and cai-conversation-harmless dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.",
"## Tokenizer Details\nWe extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.",
"## Evaluation \nFor evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages",
"## Uses",
"### Direct Use\n\n\nUse of this model is governed by the Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.",
"### Out-of-Scope Use\n\n\nSambaLingo should NOT be used for:\n\n- Mission-critical applications\n- Applications that involve the safety of others\n- Making highly important decisions",
"## Bias, Risks, and Limitations\n\n\n\nLike all LLMs, SambaLingo has certain limitations:\n- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.\n- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.\n- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.\n- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.\n- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.",
"## Acknowledgments\nWe extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.\n\nWe would like to give a special thanks to the following groups:\n- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset\n- Nguyen et al for open sourcing CulturaX dataset\n- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset\n- EleutherAI for their open source evaluation framework\n- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo",
"## Cite SambaLingo"
] |
[
"TAGS\n#transformers #pytorch #llama #text-generation #conversational #th #en #dataset-HuggingFaceH4/ultrachat_200k #dataset-HuggingFaceH4/ultrafeedback_binarized #dataset-HuggingFaceH4/cai-conversation-harmless #arxiv-2404.05829 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# SambaLingo-Thai-Chat-70B\n\n<img src=\"SambaLingo_Logo.png\" width=\"340\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nSambaLingo-Thai-Chat-70B is a human aligned chat model trained in Thai and English. It is trained using direct preference optimization on top the base model SambaLingo-Thai-Base-70B. The base model adapts Llama-2-70b to Thai by training on 26 billion tokens from the Thai split of the Cultura-X dataset. Try This Model at SambaLingo-chat-space.",
"## Model Description\n\n\n- Developed by: SambaNova Systems\n- Model type: Language Model\n- Language(s): Thai, English\n- Finetuned from model: Llama-2-70b\n- Paper: SambaLingo: Teaching Large Language Models New Languages\n- Blog Post: sambalingo-open-source-language-experts",
"## Getting Started",
"### Loading Model With Hugging Face\nPlease make sure to set use_fast=False when loading the tokenizer.",
"### Interacting With Model Pipeline\nPlease make sure to set use_fast=False when loading the tokenizer.",
"### Suggested Inference Parameters\n- Temperature: 0.8\n- Repetition penalty: 1.0\n- Top-p: 0.9",
"### Prompting Guidelines\nTo prompt this model, please use the following chat template:",
"### Example Prompts and Generations",
"## Training Details\nThe alignment phase follows the recipe for Zephyr-7B, and comprises two stages: supervised fine-tuning (SFT) and Direct Performance Optimization (DPO).\n\nThe SFT phase was done on the ultrachat_200k dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.\n\nThe DPO phase was done on the ultrafeedback dataset and cai-conversation-harmless dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.",
"## Tokenizer Details\nWe extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.",
"## Evaluation \nFor evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages",
"## Uses",
"### Direct Use\n\n\nUse of this model is governed by the Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.",
"### Out-of-Scope Use\n\n\nSambaLingo should NOT be used for:\n\n- Mission-critical applications\n- Applications that involve the safety of others\n- Making highly important decisions",
"## Bias, Risks, and Limitations\n\n\n\nLike all LLMs, SambaLingo has certain limitations:\n- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.\n- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.\n- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.\n- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.\n- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.",
"## Acknowledgments\nWe extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.\n\nWe would like to give a special thanks to the following groups:\n- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset\n- Nguyen et al for open sourcing CulturaX dataset\n- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset\n- EleutherAI for their open source evaluation framework\n- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo",
"## Cite SambaLingo"
] |
text-generation
|
transformers
|
# SambaLingo-Arabic-Base-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Arabic-Base-70B is a pretrained bilingual Arabic and English model that adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Arabic by training on 28 billion tokens from the Arabic split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. This model reports state-of-the-art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see [sambanovasystems/SambaLingo-Arabic-Chat-70B](https://huggingface.co/sambanovasystems/SambaLingo-Arabic-Chat-70B), or try it out at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Arabic, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Arabic-Base-70B")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Arabic-Base-70B", device_map="auto", torch_dtype="auto")
```
### Suggested Inference Parameters
We suggest setting do_sample=False as this is a pretrained checkpoint.
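A minimal greedy-decoding sketch, reusing tokenizer and model from the loading example above (the prompt text and max_new_tokens value are placeholders, not from the card):
```python
inputs = tokenizer("YOUR_PROMPT", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=128)  # greedy decoding
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```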
### Prompting Guidelines
This model is a pretrained checkpoint, so to use it effectively please use few-shot prompting with exemplars. The only other prompt templating required is the standard \<s\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model, which has been aligned with human preferences: [sambanovasystems/SambaLingo-Arabic-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Arabic-Chat).
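For illustration, a few-shot prompt might look like the sketch below; the exemplars are invented for this example, and the Llama tokenizer prepends the \<s\> (BOS) token automatically:
```python
# Invented English-to-Arabic exemplars, purely illustrative.
few_shot_prompt = (
    "English: Good morning.\nArabic: صباح الخير.\n\n"
    "English: Thank you very much.\nArabic: شكرا جزيلا.\n\n"
    "English: Where is the library?\nArabic:"
)
inputs = tokenizer(few_shot_prompt, return_tensors="pt").to(model.device)  # BOS is added by the tokenizer
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```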
## Training Details
All pre-training is done on the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. We mix the data so that 75% comes from the language we are adapting to and 25% is English, as suggested by [Csaki et al.](https://arxiv.org/abs/2311.05741) We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens within the same text document. We train with a global batch size of 1024, a sequence length of 4096, a maximum learning rate of 1e-4 with cosine decay, a warmup ratio of 0.01, and a weight decay of 0.1.
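The per-document attention constraint amounts to a block-diagonal causal mask over each packed sequence. The sketch below is illustrative only (the helper and its names are our assumptions, not the training code used here):
```python
import torch

def packed_causal_mask(doc_lengths, seq_len=4096):
    """Boolean (seq_len, seq_len) mask: position i may attend to position j
    only if j <= i and both tokens belong to the same packed document."""
    doc_ids = torch.repeat_interleave(
        torch.arange(len(doc_lengths)), torch.tensor(doc_lengths)
    )
    assert doc_ids.numel() == seq_len, "document lengths must sum to seq_len"
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    same_doc = doc_ids.unsqueeze(0) == doc_ids.unsqueeze(1)
    return causal & same_doc

# e.g. three documents packed into one 4096-token training sequence
mask = packed_causal_mask([1500, 2000, 596])
```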
## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
title={SambaLingo: Teaching Large Language Models New Languages},
author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
year={2024},
eprint={2404.05829},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["ar", "en"], "license": "llama2", "datasets": ["uonlp/CulturaX"], "metrics": ["chrf", "accuracy", "bleu"]}
|
sambanovasystems/SambaLingo-Arabic-Base-70B
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"ar",
"en",
"dataset:uonlp/CulturaX",
"arxiv:2404.05829",
"arxiv:2311.05741",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:07:15+00:00
|
[
"2404.05829",
"2311.05741"
] |
[
"ar",
"en"
] |
TAGS
#transformers #pytorch #llama #text-generation #ar #en #dataset-uonlp/CulturaX #arxiv-2404.05829 #arxiv-2311.05741 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# SambaLingo-Arabic-Base-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
SambaLingo-Arabic-Base-70B is a pretrained bilingual Arabic and English model that adapts Llama-2-70b to Arabic by training on 28 billion tokens from the Arabic split of the Cultura-X dataset. This model reports state-of-the-art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see sambanovasystems/SambaLingo-Arabic-Chat, or try it out at SambaLingo-chat-space.
## Model Description
- Developed by: SambaNova Systems
- Model type: Language Model
- Language(s): Arabic, English
- Finetuned from model: Llama-2-70b
- Paper: SambaLingo: Teaching Large Language Models New Languages
- Blog Post: sambalingo-open-source-language-experts
## Getting Started
### Loading Model With Hugging Face
### Suggested Inference Parameters
We suggest setting do_sample=False as this is a pretrained checkpoint.
### Prompting Guidelines
This model is a pretrained checkpoint, so to use it effectively please use few-shot prompting with exemplars. The only other prompt templating required is the standard \<s\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model, which has been aligned with human preferences: sambanovasystems/SambaLingo-Arabic-Chat.
## Training Details
All pre-training is done on the Cultura-X dataset. We mix the data so that 75% comes from the language we are adapting to and 25% is English, as suggested by Csaki et al. We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens within the same text document. We train with a global batch size of 1024, a sequence length of 4096, a maximum learning rate of 1e-4 with cosine decay, a warmup ratio of 0.01, and a weight decay of 0.1.
## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
## Evaluation
For evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages
## Uses
### Direct Use
Use of this model is governed by Meta's Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
|
[
"# SambaLingo-Arabic-Base-70B\n\n<img src=\"SambaLingo_Logo.png\" width=\"340\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nSambaLingo-Arabic-Base-70B is a pretrained Bi-lingual Arabic and English model that adapts Llama-2-70b to Arabic by training on 28 billion tokens from the Arabic split of the Cultura-X dataset. This model reports state of the art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see sambanovasystems/SambaLingo-Arabic-Chat, or try it out at SambaLingo-chat-space.",
"## Model Description\n\n\n- Developed by: SambaNova Systems\n- Model type: Language Model\n- Language(s): Arabic, English\n- Finetuned from model: Llama-2-70b\n- Paper: SambaLingo: Teaching Large Language Models New Languages\n- Blog Post: sambalingo-open-source-language-experts",
"## Getting Started",
"### Loading Model With Hugging Face",
"### Suggested Inference Parameters\nWe suggest setting do_sample=False as this is a pretrained checkpoint.",
"### Prompting Guidelines\nThis model is a pretrained checkpoint, so to use it effectively please use few shot prompting with exemplars. The only other prompt templating required is the standard \\<s\\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model that has been aligned with human preferences sambanovasystems/SambaLingo-Arabic-Chat.",
"## Training Details\nAll pre-training is done on the Cultura-X dataset. We mix the data to be 75% data from the language we are adapting to, and 25% English as suggested by Csaki et al. We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens in the context of the corresponding text document. We train with a global batch size of 1024, sequence length of 4096, maximum learning rate of 1e-4 with cosine decay, warmup ratio of 0.01 and a weight decay of 0.1.",
"## Tokenizer Details\nWe extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.",
"## Evaluation\nFor evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages",
"## Uses",
"### Direct Use\n\nUse of this model is governed by the Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.",
"### Out-of-Scope Use\n\n\nSambaLingo should NOT be used for:\n\n- Mission-critical applications\n- Applications that involve the safety of others\n- Making highly important decisions",
"## Bias, Risks, and Limitations\n\n\n\nLike all LLMs, SambaLingo has certain limitations:\n- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.\n- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.\n- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.\n- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.\n- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.",
"## Acknowledgments\nWe extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.\n\nWe would like to give a special thanks to the following groups:\n- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset\n- Nguyen et al for open sourcing CulturaX dataset\n- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset\n- EleutherAI for their open source evaluation framework\n- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo",
"## Cite SambaLingo"
] |
[
"TAGS\n#transformers #pytorch #llama #text-generation #ar #en #dataset-uonlp/CulturaX #arxiv-2404.05829 #arxiv-2311.05741 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# SambaLingo-Arabic-Base-70B\n\n<img src=\"SambaLingo_Logo.png\" width=\"340\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nSambaLingo-Arabic-Base-70B is a pretrained Bi-lingual Arabic and English model that adapts Llama-2-70b to Arabic by training on 28 billion tokens from the Arabic split of the Cultura-X dataset. This model reports state of the art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see sambanovasystems/SambaLingo-Arabic-Chat, or try it out at SambaLingo-chat-space.",
"## Model Description\n\n\n- Developed by: SambaNova Systems\n- Model type: Language Model\n- Language(s): Arabic, English\n- Finetuned from model: Llama-2-70b\n- Paper: SambaLingo: Teaching Large Language Models New Languages\n- Blog Post: sambalingo-open-source-language-experts",
"## Getting Started",
"### Loading Model With Hugging Face",
"### Suggested Inference Parameters\nWe suggest setting do_sample=False as this is a pretrained checkpoint.",
"### Prompting Guidelines\nThis model is a pretrained checkpoint, so to use it effectively please use few shot prompting with exemplars. The only other prompt templating required is the standard \\<s\\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model that has been aligned with human preferences sambanovasystems/SambaLingo-Arabic-Chat.",
"## Training Details\nAll pre-training is done on the Cultura-X dataset. We mix the data to be 75% data from the language we are adapting to, and 25% English as suggested by Csaki et al. We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens in the context of the corresponding text document. We train with a global batch size of 1024, sequence length of 4096, maximum learning rate of 1e-4 with cosine decay, warmup ratio of 0.01 and a weight decay of 0.1.",
"## Tokenizer Details\nWe extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.",
"## Evaluation\nFor evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages",
"## Uses",
"### Direct Use\n\nUse of this model is governed by the Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.",
"### Out-of-Scope Use\n\n\nSambaLingo should NOT be used for:\n\n- Mission-critical applications\n- Applications that involve the safety of others\n- Making highly important decisions",
"## Bias, Risks, and Limitations\n\n\n\nLike all LLMs, SambaLingo has certain limitations:\n- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.\n- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.\n- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.\n- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.\n- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.",
"## Acknowledgments\nWe extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.\n\nWe would like to give a special thanks to the following groups:\n- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset\n- Nguyen et al for open sourcing CulturaX dataset\n- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset\n- EleutherAI for their open source evaluation framework\n- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo",
"## Cite SambaLingo"
] |
text-generation
|
transformers
|
# SambaLingo-Hungarian-Base-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Hungarian-Base-70B is a pretrained bilingual Hungarian and English model that adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Hungarian by training on 19 billion tokens from the Hungarian split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. This model reports state-of-the-art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see [sambanovasystems/SambaLingo-Hungarian-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Hungarian-Chat), or try it out at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Hungarian, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Paper**: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Hungarian-Base-70B")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Hungarian-Base-70B", device_map="auto", torch_dtype="auto")
```
### Suggested Inference Parameters
We suggest setting do_sample=False as this is a pretrained checkpoint.
### Prompting Guidelines
This model is a pretrained checkpoint, so to use it effectively please use few-shot prompting with exemplars. The only other prompt templating required is the standard \<s\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model, which has been aligned with human preferences: [sambanovasystems/SambaLingo-Hungarian-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Hungarian-Chat).
## Training Details
All pre-training is done on the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. We mix the data so that 75% comes from the language we are adapting to and 25% is English, as suggested by [Csaki et al.](https://arxiv.org/abs/2311.05741) We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens within the same text document. We train with a global batch size of 1024, a sequence length of 4096, a maximum learning rate of 1e-4 with cosine decay, a warmup ratio of 0.01, and a weight decay of 0.1.
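For illustration only (not the actual training pipeline), a 75/25 mixture of this shape can be approximated with the Hugging Face datasets library; the streaming flag and config names are assumptions:
```python
from datasets import load_dataset, interleave_datasets

# Stream to avoid downloading the full corpus; CulturaX uses language codes as config names.
hu = load_dataset("uonlp/CulturaX", "hu", split="train", streaming=True)
en = load_dataset("uonlp/CulturaX", "en", split="train", streaming=True)

# 75% adapted-language data, 25% English, sampled per example.
mixed = interleave_datasets([hu, en], probabilities=[0.75, 0.25], seed=42)
```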
## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
title={SambaLingo: Teaching Large Language Models New Languages},
author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
year={2024},
eprint={2404.05829},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["hu", "en"], "license": "llama2", "datasets": ["uonlp/CulturaX"], "metrics": ["chrf", "accuracy", "bleu"]}
|
sambanovasystems/SambaLingo-Hungarian-Base-70B
| null |
[
"transformers",
"pytorch",
"llama",
"text-generation",
"hu",
"en",
"dataset:uonlp/CulturaX",
"arxiv:2404.05829",
"arxiv:2311.05741",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:07:31+00:00
|
[
"2404.05829",
"2311.05741"
] |
[
"hu",
"en"
] |
TAGS
#transformers #pytorch #llama #text-generation #hu #en #dataset-uonlp/CulturaX #arxiv-2404.05829 #arxiv-2311.05741 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# SambaLingo-Hungarian-Base-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
SambaLingo-Hungarian-Base-70B is a pretrained bilingual Hungarian and English model that adapts Llama-2-70b to Hungarian by training on 19 billion tokens from the Hungarian split of the Cultura-X dataset. This model reports state-of-the-art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see sambanovasystems/SambaLingo-Hungarian-Chat, or try it out at SambaLingo-chat-space.
## Model Description
- Developed by: SambaNova Systems
- Model type: Language Model
- Language(s): Hungarian, English
- Finetuned from model: Llama-2-70b
- Paper: SambaLingo: Teaching Large Language Models New Languages
- Blog Post: sambalingo-open-source-language-experts
## Getting Started
### Loading Model With Hugging Face
### Suggested Inference Parameters
We suggest setting do_sample=False as this is a pretrained checkpoint.
### Prompting Guidelines
This model is a pretrained checkpoint, so to use it effectively please use few-shot prompting with exemplars. The only other prompt templating required is the standard \<s\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model, which has been aligned with human preferences: sambanovasystems/SambaLingo-Hungarian-Chat.
## Training Details
All pre-training is done on the Cultura-X dataset. We mix the data so that 75% comes from the language we are adapting to and 25% is English, as suggested by Csaki et al. We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens within the same text document. We train with a global batch size of 1024, a sequence length of 4096, a maximum learning rate of 1e-4 with cosine decay, a warmup ratio of 0.01, and a weight decay of 0.1.
## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
## Evaluation
For evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages
## Uses
### Direct Use
Use of this model is governed by Meta's Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
|
[
"# SambaLingo-Hungarian-Base-70B\n\n<img src=\"SambaLingo_Logo.png\" width=\"340\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nSambaLingo-Hungarian-Base-70B is a pretrained Bi-lingual Hungarian and English model that adapts Llama-2-70b to Hungarian by training on 19 billion tokens from the Hungarian split of the Cultura-X dataset. This model reports state of the art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see sambanovasystems/SambaLingo-Hungarian-Chat, or try it out at SambaLingo-chat-space",
"## Model Description\n\n\n- Developed by: SambaNova Systems\n- Model type: Language Model\n- Language(s): Hungarian, English\n- Finetuned from model: Llama-2-70b\n- Paper: SambaLingo: Teaching Large Language Models New Languages\n- Blog Post: sambalingo-open-source-language-experts",
"## Getting Started",
"### Loading Model With Hugging Face",
"### Suggested Inference Parameters\nWe suggest setting do_sample=False as this is a pretrained checkpoint.",
"### Prompting Guidelines\nThis model is a pretrained checkpoint, so to use it effectively please use few shot prompting with exemplars. The only other prompt templating required is the standard \\<s\\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model that has been aligned with human preferences sambanovasystems/SambaLingo-Hungarian-Chat.",
"## Training Details\nAll pre-training is done on the Cultura-X dataset. We mix the data to be 75% data from the language we are adapting to, and 25% English as suggested by Csaki et al. We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens in the context of the corresponding text document. We train with a global batch size of 1024, sequence length of 4096, maximum learning rate of 1e-4 with cosine decay, warmup ratio of 0.01 and a weight decay of 0.1.",
"## Tokenizer Details\nWe extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.",
"## Evaluation\nFor evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages",
"## Uses",
"### Direct Use\n\nUse of this model is governed by the Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.",
"### Out-of-Scope Use\n\nSambaLingo should NOT be used for:\n\n- Mission-critical applications\n- Applications that involve the safety of others\n- Making highly important decisions",
"## Bias, Risks, and Limitations\n\n\nLike all LLMs, SambaLingo has certain limitations:\n- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.\n- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.\n- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.\n- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.\n- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.",
"## Acknowledgments\nWe extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.\n\nWe would like to give a special thanks to the following groups:\n- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset\n- Nguyen et al for open sourcing CulturaX dataset\n- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset\n- EleutherAI for their open source evaluation framework\n- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo",
"## Cite SambaLingo"
] |
[
"TAGS\n#transformers #pytorch #llama #text-generation #hu #en #dataset-uonlp/CulturaX #arxiv-2404.05829 #arxiv-2311.05741 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# SambaLingo-Hungarian-Base-70B\n\n<img src=\"SambaLingo_Logo.png\" width=\"340\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nSambaLingo-Hungarian-Base-70B is a pretrained Bi-lingual Hungarian and English model that adapts Llama-2-70b to Hungarian by training on 19 billion tokens from the Hungarian split of the Cultura-X dataset. This model reports state of the art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see sambanovasystems/SambaLingo-Hungarian-Chat, or try it out at SambaLingo-chat-space",
"## Model Description\n\n\n- Developed by: SambaNova Systems\n- Model type: Language Model\n- Language(s): Hungarian, English\n- Finetuned from model: Llama-2-70b\n- Paper: SambaLingo: Teaching Large Language Models New Languages\n- Blog Post: sambalingo-open-source-language-experts",
"## Getting Started",
"### Loading Model With Hugging Face",
"### Suggested Inference Parameters\nWe suggest setting do_sample=False as this is a pretrained checkpoint.",
"### Prompting Guidelines\nThis model is a pretrained checkpoint, so to use it effectively please use few shot prompting with exemplars. The only other prompt templating required is the standard \\<s\\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model that has been aligned with human preferences sambanovasystems/SambaLingo-Hungarian-Chat.",
"## Training Details\nAll pre-training is done on the Cultura-X dataset. We mix the data to be 75% data from the language we are adapting to, and 25% English as suggested by Csaki et al. We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens in the context of the corresponding text document. We train with a global batch size of 1024, sequence length of 4096, maximum learning rate of 1e-4 with cosine decay, warmup ratio of 0.01 and a weight decay of 0.1.",
"## Tokenizer Details\nWe extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.",
"## Evaluation\nFor evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages",
"## Uses",
"### Direct Use\n\nUse of this model is governed by the Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.",
"### Out-of-Scope Use\n\nSambaLingo should NOT be used for:\n\n- Mission-critical applications\n- Applications that involve the safety of others\n- Making highly important decisions",
"## Bias, Risks, and Limitations\n\n\nLike all LLMs, SambaLingo has certain limitations:\n- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.\n- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.\n- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.\n- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.\n- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.",
"## Acknowledgments\nWe extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.\n\nWe would like to give a special thanks to the following groups:\n- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset\n- Nguyen et al for open sourcing CulturaX dataset\n- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset\n- EleutherAI for their open source evaluation framework\n- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo",
"## Cite SambaLingo"
] |
text-generation
|
transformers
|
# SambaLingo-Thai-Base-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Thai-Base-70B is a pretrained bilingual Thai and English model that adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Thai by training on 26 billion tokens from the Thai split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. This model reports state-of-the-art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see [sambanovasystems/SambaLingo-Thai-Chat-70B](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Chat-70B), or try it out at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Thai, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Base-70B")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Thai-Base-70B", device_map="auto", torch_dtype="auto")
```
### Suggested Inference Parameters
We suggest setting do_sample=False, as this is a pretrained checkpoint.
### Prompting Guidelines
This model is a pretrained checkpoint, so to use it effectively, please use few-shot prompting with exemplars. The only other prompt templating required is the standard \<s\> (BOS) token from the Llama tokenizer. If you want to interact with this model through direct questions or queries, please use the chat version of the model, which has been aligned with human preferences: [sambanovasystems/SambaLingo-Thai-Chat-70B](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Chat-70B).
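As a minimal sketch (not part of the original card), a few-shot prompt with greedy decoding could look like this, reusing the `tokenizer` and `model` loaded above; the exemplars are hypothetical placeholders:

```python
# Minimal sketch: few-shot prompting with greedy decoding (do_sample=False).
# The exemplars below are hypothetical placeholders, not from the card.
prompt = (
    "Question: What is the capital of Thailand?\n"
    "Answer: Bangkok\n"
    "Question: What is the capital of France?\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```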
## Training Details
All pre-training is done on the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. We mix the data so that 75% comes from the language we are adapting to and 25% is English, as suggested by [Csaki et al.](https://arxiv.org/abs/2311.05741) We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens within the same text document. We train with a global batch size of 1024, a sequence length of 4096, a maximum learning rate of 1e-4 with cosine decay, a warmup ratio of 0.01, and a weight decay of 0.1.
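For illustration only, the listed schedule corresponds roughly to the following optimizer setup in PyTorch/Transformers; the total step count is hypothetical, since the card does not state it:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

total_steps = 10_000  # hypothetical; the card does not state the step count
# Reuses the `model` loaded above; shown only to illustrate the hyperparameters.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.1)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.01 * total_steps),  # warmup ratio of 0.01
    num_training_steps=total_steps,            # cosine decay over training
)
```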
## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
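As a rough sketch of this general recipe (the exact token-selection procedure is not described in the card), vocabulary extension with Transformers looks like this; `new_tokens` is a hypothetical list mined from the target-language corpus:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf")

# Hypothetical new tokens mined from the target-language corpus.
new_tokens = ["ประเทศไทย", "กรุงเทพ"]
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to match
print(f"Added {num_added} tokens; vocabulary size is now {len(tokenizer)}")
```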
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta’s [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction-tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
title={SambaLingo: Teaching Large Language Models New Languages},
author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
year={2024},
eprint={2404.05829},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["th", "en"], "license": "llama2", "datasets": ["uonlp/CulturaX"], "metrics": ["chrf", "accuracy", "bleu"]}
|
sambanovasystems/SambaLingo-Thai-Base-70B
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"th",
"en",
"dataset:uonlp/CulturaX",
"arxiv:2404.05829",
"arxiv:2311.05741",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:07:50+00:00
|
[
"2404.05829",
"2311.05741"
] |
[
"th",
"en"
] |
TAGS
#transformers #safetensors #llama #text-generation #th #en #dataset-uonlp/CulturaX #arxiv-2404.05829 #arxiv-2311.05741 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# SambaLingo-Thai-Base-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
SambaLingo-Thai-Base-70B is a pretrained bilingual Thai and English model that adapts Llama-2-70b to Thai by training on 26 billion tokens from the Thai split of the Cultura-X dataset. This model reports state-of-the-art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see sambanovasystems/SambaLingo-Thai-Chat-70B, or try it out at SambaLingo-chat-space.
## Model Description
- Developed by: SambaNova Systems
- Model type: Language Model
- Language(s): Thai, English
- Finetuned from model: Llama-2-70b
- Paper: SambaLingo: Teaching Large Language Models New Languages
- Blog Post: sambalingo-open-source-language-experts
## Getting Started
### Loading Model With Hugging Face
### Suggested Inference Parameters
We suggest setting do_sample=False, as this is a pretrained checkpoint.
### Prompting Guidelines
This model is a pretrained checkpoint, so to use it effectively, please use few-shot prompting with exemplars. The only other prompt templating required is the standard \<s\> (BOS) token from the Llama tokenizer. If you want to interact with this model through direct questions or queries, please use the chat version of the model, which has been aligned with human preferences: sambanovasystems/SambaLingo-Thai-Chat-70B.
## Training Details
All pre-training is done on the Cultura-X dataset. We mix the data so that 75% comes from the language we are adapting to and 25% is English, as suggested by Csaki et al. We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens within the same text document. We train with a global batch size of 1024, a sequence length of 4096, a maximum learning rate of 1e-4 with cosine decay, a warmup ratio of 0.01, and a weight decay of 0.1.
## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
## Evaluation
For evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages
## Uses
### Direct Use
Use of this model is governed by Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction-tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
|
[
"# SambaLingo-Thai-Base-70B\n\n<img src=\"SambaLingo_Logo.png\" width=\"340\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nSambaLingo-Thai-Base-70B is a pretrained Bi-lingual Thai and English model that adapts Llama-2-70b to Thai by training on 26 billion tokens from the Thai split of the Cultura-X dataset. This model reports state of the art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see sambanovasystems/SambaLingo-Thai-Chat-70B, or try it out at SambaLingo-chat-space.",
"## Model Description\n\n\n- Developed by: SambaNova Systems\n- Model type: Language Model\n- Language(s): Thai, English\n- Finetuned from model: Llama-2-70b\n- Paper: SambaLingo: Teaching Large Language Models New Languages\n- Blog Post: sambalingo-open-source-language-experts",
"## Getting Started",
"### Loading Model With Hugging Face",
"### Suggested Inference Parameters\nWe suggest setting do_sample=False as this is a pretrained checkpoint.",
"### Prompting Guidelines\nThis model is a pretrained checkpoint, so to use it effectively please use few shot prompting with exemplars. The only other prompt templating required is the standard \\<s\\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model that has been aligned with human preferences sambanovasystems/SambaLingo-Thai-Chat-70B.",
"## Training Details\nAll pre-training is done on the Cultura-X dataset. We mix the data to be 75% data from the language we are adapting to, and 25% English as suggested by Csaki et al. We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens in the context of the corresponding text document. We train with a global batch size of 1024, sequence length of 4096, maximum learning rate of 1e-4 with cosine decay, warmup ratio of 0.01 and a weight decay of 0.1.",
"## Tokenizer Details\nWe extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.",
"## Evaluation\nFor evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages",
"## Uses",
"### Direct Use\n\n\nUse of this model is governed by the Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.",
"### Out-of-Scope Use\n\n\nSambaLingo should NOT be used for:\n\n- Mission-critical applications\n- Applications that involve the safety of others\n- Making highly important decisions",
"## Bias, Risks, and Limitations\n\n\n\nLike all LLMs, SambaLingo has certain limitations:\n- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.\n- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.\n- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.\n- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.\n- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.",
"## Acknowledgments\nWe extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.\n\nWe would like to give a special thanks to the following groups:\n- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset\n- Nguyen et al for open sourcing CulturaX dataset\n- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset\n- EleutherAI for their open source evaluation framework\n- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo",
"## Cite SambaLingo"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #th #en #dataset-uonlp/CulturaX #arxiv-2404.05829 #arxiv-2311.05741 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# SambaLingo-Thai-Base-70B\n\n<img src=\"SambaLingo_Logo.png\" width=\"340\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nSambaLingo-Thai-Base-70B is a pretrained Bi-lingual Thai and English model that adapts Llama-2-70b to Thai by training on 26 billion tokens from the Thai split of the Cultura-X dataset. This model reports state of the art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see sambanovasystems/SambaLingo-Thai-Chat-70B, or try it out at SambaLingo-chat-space.",
"## Model Description\n\n\n- Developed by: SambaNova Systems\n- Model type: Language Model\n- Language(s): Thai, English\n- Finetuned from model: Llama-2-70b\n- Paper: SambaLingo: Teaching Large Language Models New Languages\n- Blog Post: sambalingo-open-source-language-experts",
"## Getting Started",
"### Loading Model With Hugging Face",
"### Suggested Inference Parameters\nWe suggest setting do_sample=False as this is a pretrained checkpoint.",
"### Prompting Guidelines\nThis model is a pretrained checkpoint, so to use it effectively please use few shot prompting with exemplars. The only other prompt templating required is the standard \\<s\\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model that has been aligned with human preferences sambanovasystems/SambaLingo-Thai-Chat-70B.",
"## Training Details\nAll pre-training is done on the Cultura-X dataset. We mix the data to be 75% data from the language we are adapting to, and 25% English as suggested by Csaki et al. We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens in the context of the corresponding text document. We train with a global batch size of 1024, sequence length of 4096, maximum learning rate of 1e-4 with cosine decay, warmup ratio of 0.01 and a weight decay of 0.1.",
"## Tokenizer Details\nWe extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.",
"## Evaluation\nFor evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages",
"## Uses",
"### Direct Use\n\n\nUse of this model is governed by the Meta’s Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.",
"### Out-of-Scope Use\n\n\nSambaLingo should NOT be used for:\n\n- Mission-critical applications\n- Applications that involve the safety of others\n- Making highly important decisions",
"## Bias, Risks, and Limitations\n\n\n\nLike all LLMs, SambaLingo has certain limitations:\n- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.\n- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.\n- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.\n- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.\n- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.",
"## Acknowledgments\nWe extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.\n\nWe would like to give a special thanks to the following groups:\n- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset\n- Nguyen et al for open sourcing CulturaX dataset\n- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset\n- EleutherAI for their open source evaluation framework\n- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo",
"## Cite SambaLingo"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Weblet/phi-1.5-turbo1
| null |
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:10:10+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Novin-AI/Rava-3x7B-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
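For example (not part of the original README), a downloaded quant can be loaded with the third-party llama-cpp-python bindings; the file name below is the Q4_K_M quant from the table:

```python
from llama_cpp import Llama

# Load a quant downloaded from this repo (file name taken from the table below).
llm = Llama(model_path="Rava-3x7B-v0.1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a short poem about rivers.", max_tokens=64)
print(out["choices"][0]["text"])
```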
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q2_K.gguf) | Q2_K | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.IQ3_XS.gguf) | IQ3_XS | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.IQ3_S.gguf) | IQ3_S | 8.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.IQ3_M.gguf) | IQ3_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 13.2 | |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q6_K.gguf) | Q6_K | 15.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Rava-3x7B-v0.1-GGUF/resolve/main/Rava-3x7B-v0.1.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "base_model": "Novin-AI/Rava-3x7B-v0.1", "quantized_by": "mradermacher"}
|
mradermacher/Rava-3x7B-v0.1-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:Novin-AI/Rava-3x7B-v0.1",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T20:14:16+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #base_model-Novin-AI/Rava-3x7B-v0.1 #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #base_model-Novin-AI/Rava-3x7B-v0.1 #endpoints_compatible #region-us \n"
] |
video-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vivit-b-16x2-collected-dataset
This model is a fine-tuned version of [google/vivit-b-16x2-kinetics400](https://huggingface.co/google/vivit-b-16x2-kinetics400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2578
- Accuracy: 0.9610
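A minimal inference sketch (not generated by the Trainer) might look as follows, assuming the image processor configuration was pushed alongside the weights; the random clip is a placeholder for 32 real video frames:

```python
import numpy as np
import torch
from transformers import VivitForVideoClassification, VivitImageProcessor

repo = "yehiawp4/vivit-b-16x2-collected-dataset"
processor = VivitImageProcessor.from_pretrained(repo)
model = VivitForVideoClassification.from_pretrained(repo)

# Placeholder clip: 32 RGB frames of 224x224 (replace with real video frames).
video = list(np.random.randint(0, 255, (32, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```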
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 14020
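For reference, these settings map onto roughly the following `TrainingArguments` (a sketch, not the exact training script):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vivit-b-16x2-collected-dataset",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=14020,
)
```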
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1001 | 0.1 | 1403 | 0.8989 | 0.7789 |
| 0.2646 | 1.1 | 2806 | 0.5655 | 0.8857 |
| 0.0785 | 2.1 | 4209 | 0.4806 | 0.9053 |
| 0.0001 | 3.1 | 5612 | 0.3706 | 0.9398 |
| 0.054 | 4.1 | 7015 | 0.4007 | 0.9368 |
| 0.0003 | 5.1 | 8418 | 0.2354 | 0.9669 |
| 0.0001 | 6.1 | 9821 | 0.3900 | 0.9474 |
| 0.0003 | 7.1 | 11224 | 0.2667 | 0.9579 |
| 0.0001 | 8.1 | 12627 | 0.2436 | 0.9654 |
| 0.0 | 9.1 | 14020 | 0.2432 | 0.9654 |
### Framework versions
- Transformers 4.39.0
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vivit-b-16x2-kinetics400", "model-index": [{"name": "vivit-b-16x2-collected-dataset", "results": []}]}
|
yehiawp4/vivit-b-16x2-collected-dataset
| null |
[
"transformers",
"safetensors",
"vivit",
"video-classification",
"generated_from_trainer",
"base_model:google/vivit-b-16x2-kinetics400",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T20:16:08+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #vivit #video-classification #generated_from_trainer #base_model-google/vivit-b-16x2-kinetics400 #license-mit #endpoints_compatible #region-us
|
vivit-b-16x2-collected-dataset
==============================
This model is a fine-tuned version of google/vivit-b-16x2-kinetics400 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2578
* Accuracy: 0.9610
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 14020
### Training results
### Framework versions
* Transformers 4.39.0
* Pytorch 2.1.0
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 14020",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0\n* Pytorch 2.1.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #vivit #video-classification #generated_from_trainer #base_model-google/vivit-b-16x2-kinetics400 #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 14020",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0\n* Pytorch 2.1.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Kukedlc/Neural-4-QA-7b](https://huggingface.co/Kukedlc/Neural-4-QA-7b)
* [allknowingroger/NeuralCeptrix-7B-slerp](https://huggingface.co/allknowingroger/NeuralCeptrix-7B-slerp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: allknowingroger/NeuralCeptrix-7B-slerp
layer_range: [0, 32]
- model: Kukedlc/Neural-4-QA-7b
layer_range: [0, 32]
merge_method: slerp
base_model: allknowingroger/NeuralCeptrix-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
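To reproduce the merge, the YAML above is typically saved to a file and passed to mergekit's `mergekit-yaml` command-line tool. The resulting weights load like any Transformers causal LM; a minimal sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Kukedlc/Neural-4-QA-7b-v0.2")
model = AutoModelForCausalLM.from_pretrained(
    "Kukedlc/Neural-4-QA-7b-v0.2", torch_dtype="auto", device_map="auto"
)
```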
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Kukedlc/Neural-4-QA-7b", "allknowingroger/NeuralCeptrix-7B-slerp"]}
|
Kukedlc/Neural-4-QA-7b-v0.2
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Kukedlc/Neural-4-QA-7b",
"base_model:allknowingroger/NeuralCeptrix-7B-slerp",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:17:19+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #base_model-Kukedlc/Neural-4-QA-7b #base_model-allknowingroger/NeuralCeptrix-7B-slerp #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Kukedlc/Neural-4-QA-7b
* allknowingroger/NeuralCeptrix-7B-slerp
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Kukedlc/Neural-4-QA-7b\n* allknowingroger/NeuralCeptrix-7B-slerp",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #base_model-Kukedlc/Neural-4-QA-7b #base_model-allknowingroger/NeuralCeptrix-7B-slerp #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* Kukedlc/Neural-4-QA-7b\n* allknowingroger/NeuralCeptrix-7B-slerp",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-question-answer-summarization
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1420
- Rouge1: 87.2659
- Rouge2: 79.1621
- Rougel: 84.0716
- Rougelsum: 84.0332
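As a minimal usage sketch (not part of the auto-generated card), the fine-tuned checkpoint can be exercised through the summarization pipeline; the input text is a placeholder:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="JohnDoe70/t5-summarization")
text = "..."  # placeholder: a question-answer passage to condense
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```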
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
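For reference, these settings map onto roughly the following `Seq2SeqTrainingArguments` (a sketch, not the exact training script):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="t5-base-question-answer-summarization",  # hypothetical output path
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    predict_with_generate=True,  # needed to compute ROUGE during evaluation
)
```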
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.3593 | 1.0 | 450 | 0.1339 | 87.0068 | 78.4882 | 83.5134 | 83.4528 |
| 0.121 | 2.0 | 900 | 0.1273 | 87.3363 | 79.1644 | 83.7472 | 83.7456 |
| 0.0982 | 3.0 | 1350 | 0.1314 | 87.0066 | 78.3475 | 83.0262 | 82.9739 |
| 0.084 | 4.0 | 1800 | 0.1322 | 87.1678 | 78.7514 | 83.4642 | 83.441 |
| 0.074 | 5.0 | 2250 | 0.1345 | 87.2618 | 79.114 | 83.9859 | 83.9444 |
| 0.0685 | 6.0 | 2700 | 0.1378 | 87.1497 | 79.0628 | 83.958 | 83.9482 |
| 0.0609 | 7.0 | 3150 | 0.1419 | 86.993 | 78.781 | 83.8076 | 83.7681 |
| 0.0591 | 8.0 | 3600 | 0.1420 | 87.2659 | 79.1621 | 84.0716 | 84.0332 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "base_model": "google-t5/t5-base", "model-index": [{"name": "t5-base-question-answer-summarization", "results": []}]}
|
JohnDoe70/t5-summarization
| null |
[
"transformers",
"tensorboard",
"onnx",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:17:38+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #onnx #safetensors #t5 #text2text-generation #summarization #generated_from_trainer #base_model-google-t5/t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-base-question-answer-summarization
=====================================
This model is a fine-tuned version of google-t5/t5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1420
* Rouge1: 87.2659
* Rouge2: 79.1621
* Rougel: 84.0716
* Rougelsum: 84.0332
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #onnx #safetensors #t5 #text2text-generation #summarization #generated_from_trainer #base_model-google-t5/t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
sentence-similarity
|
sentence-transformers
|
# Yunika/muril-base-sentence-transformer
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Yunika/muril-base-sentence-transformer')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Yunika/muril-base-sentence-transformer')
model = AutoModel.from_pretrained('Yunika/muril-base-sentence-transformer')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
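As an optional follow-up (not in the original card), the two embeddings computed above can be compared with cosine similarity:

```python
import torch.nn.functional as F

score = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(f"Cosine similarity: {score.item():.4f}")
```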
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Yunika/muril-base-sentence-transformer)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3181 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 8,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 25448,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["embedding-data/QQP_triplets"], "pipeline_tag": "sentence-similarity"}
|
Yunika/muril-base-sentence-transformer
| null |
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:embedding-data/QQP_triplets",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T20:19:44+00:00
|
[] |
[] |
TAGS
#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #dataset-embedding-data/QQP_triplets #endpoints_compatible #region-us
|
# Yunika/muril-base-sentence-transformer
This is a sentence-transformers model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 3181 with parameters:
Loss:
'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# Yunika/muril-base-sentence-transformer\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 3181 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #dataset-embedding-data/QQP_triplets #endpoints_compatible #region-us \n",
"# Yunika/muril-base-sentence-transformer\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 3181 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text2text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
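The section is a placeholder, so nothing below is from the card. A hedged sketch inferred from the repo tags (m2m_100, text2text-generation); the language pair and prompt are assumptions:
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("ashwinradhe/m2m_fb")
model = M2M100ForConditionalGeneration.from_pretrained("ashwinradhe/m2m_fb")

tokenizer.src_lang = "en"  # assumed source language
encoded = tokenizer("Hello, how are you?", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("fr"))  # assumed target
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```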
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
ashwinradhe/m2m_fb
| null |
[
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T20:20:21+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #m2m_100 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #m2m_100 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
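The section is a placeholder; a hedged quick-start inferred from the repo tags (gpt2, text-generation), with an arbitrary prompt:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="devbuzz142/cp05-finetune-gpt2-ALL-NNN4-split-13epoch")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```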
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
devbuzz142/cp05-finetune-gpt2-ALL-NNN4-split-13epoch
| null |
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:20:22+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
adapter-transformers
|
# Adapter `BigTMiami/pretrain_tapt_seq_bn_adpater` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset/) dataset and includes a prediction head for masked lm.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel

# Load the base model with adapter support, then fetch the adapter from the
# Hub and activate it in a single call
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/pretrain_tapt_seq_bn_adpater", source="hf", set_active=True)
```
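The card stops at loading; here is a hypothetical masked-LM inference step to round it out. `model` comes from the snippet above, and the example sentence is ours, not the card's:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This product was very <mask>.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # MLM head loaded with the adapter

# Decode the most likely token at the masked position
mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_pos].argmax(-1)))
```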
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["adapter-transformers", "roberta"], "datasets": ["BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset"]}
|
BigTMiami/pretrain_tapt_seq_bn_adpater
| null |
[
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset",
"region:us"
] | null |
2024-04-15T20:21:43+00:00
|
[] |
[] |
TAGS
#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset #region-us
|
# Adapter 'BigTMiami/pretrain_tapt_seq_bn_adpater' for roberta-base
An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset dataset and includes a prediction head for masked lm.
This adapter was created for usage with the Adapters library.
## Usage
First, install 'adapters':
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'BigTMiami/pretrain_tapt_seq_bn_adpater' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset #region-us \n",
"# Adapter 'BigTMiami/pretrain_tapt_seq_bn_adpater' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.",
"## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_4096_512_46M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4260
- F1 Score: 0.8426
- Accuracy: 0.8426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hypothetical `TrainingArguments` sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
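A hypothetical `TrainingArguments` equivalent of the list above; anything not listed (output path, logging, evaluation cadence) is an assumption:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_prom_prom_300_all-seqsight_4096_512_46M-L32_all",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10000,
)
```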
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5397 | 8.33 | 200 | 0.4533 | 0.7925 | 0.7934 |
| 0.4299 | 16.67 | 400 | 0.4126 | 0.8125 | 0.8127 |
| 0.3758 | 25.0 | 600 | 0.3961 | 0.8269 | 0.8274 |
| 0.3406 | 33.33 | 800 | 0.3730 | 0.8408 | 0.8409 |
| 0.3136 | 41.67 | 1000 | 0.3854 | 0.8327 | 0.8328 |
| 0.2961 | 50.0 | 1200 | 0.3897 | 0.8386 | 0.8387 |
| 0.2767 | 58.33 | 1400 | 0.3795 | 0.8427 | 0.8427 |
| 0.2603 | 66.67 | 1600 | 0.3889 | 0.8466 | 0.8466 |
| 0.2453 | 75.0 | 1800 | 0.3807 | 0.8463 | 0.8463 |
| 0.2335 | 83.33 | 2000 | 0.3953 | 0.8490 | 0.8490 |
| 0.2213 | 91.67 | 2200 | 0.4068 | 0.8472 | 0.8473 |
| 0.2091 | 100.0 | 2400 | 0.4050 | 0.8468 | 0.8468 |
| 0.2 | 108.33 | 2600 | 0.4403 | 0.8463 | 0.8463 |
| 0.1926 | 116.67 | 2800 | 0.4278 | 0.8483 | 0.8483 |
| 0.183 | 125.0 | 3000 | 0.4306 | 0.8466 | 0.8466 |
| 0.1748 | 133.33 | 3200 | 0.4506 | 0.8427 | 0.8427 |
| 0.1689 | 141.67 | 3400 | 0.4609 | 0.8471 | 0.8471 |
| 0.161 | 150.0 | 3600 | 0.4947 | 0.8472 | 0.8473 |
| 0.1563 | 158.33 | 3800 | 0.4769 | 0.8476 | 0.8476 |
| 0.1521 | 166.67 | 4000 | 0.4817 | 0.8406 | 0.8407 |
| 0.1446 | 175.0 | 4200 | 0.4940 | 0.8424 | 0.8426 |
| 0.1406 | 183.33 | 4400 | 0.4975 | 0.8435 | 0.8436 |
| 0.1344 | 191.67 | 4600 | 0.5170 | 0.8422 | 0.8422 |
| 0.1308 | 200.0 | 4800 | 0.5116 | 0.8440 | 0.8443 |
| 0.1275 | 208.33 | 5000 | 0.5406 | 0.8400 | 0.8404 |
| 0.1226 | 216.67 | 5200 | 0.5376 | 0.8415 | 0.8417 |
| 0.1209 | 225.0 | 5400 | 0.5134 | 0.8464 | 0.8465 |
| 0.118 | 233.33 | 5600 | 0.5361 | 0.8413 | 0.8414 |
| 0.1148 | 241.67 | 5800 | 0.5350 | 0.8430 | 0.8431 |
| 0.1122 | 250.0 | 6000 | 0.5473 | 0.8419 | 0.8421 |
| 0.1084 | 258.33 | 6200 | 0.5554 | 0.8401 | 0.8402 |
| 0.106 | 266.67 | 6400 | 0.5645 | 0.8415 | 0.8417 |
| 0.1041 | 275.0 | 6600 | 0.5328 | 0.8463 | 0.8463 |
| 0.1023 | 283.33 | 6800 | 0.5634 | 0.8426 | 0.8427 |
| 0.0985 | 291.67 | 7000 | 0.5636 | 0.8437 | 0.8438 |
| 0.0979 | 300.0 | 7200 | 0.5576 | 0.8408 | 0.8409 |
| 0.0959 | 308.33 | 7400 | 0.5664 | 0.8450 | 0.8451 |
| 0.0953 | 316.67 | 7600 | 0.5901 | 0.8420 | 0.8422 |
| 0.0928 | 325.0 | 7800 | 0.5799 | 0.8435 | 0.8436 |
| 0.0915 | 333.33 | 8000 | 0.5879 | 0.8407 | 0.8409 |
| 0.0891 | 341.67 | 8200 | 0.5827 | 0.8443 | 0.8444 |
| 0.0884 | 350.0 | 8400 | 0.6022 | 0.8383 | 0.8385 |
| 0.0872 | 358.33 | 8600 | 0.6098 | 0.8434 | 0.8436 |
| 0.085 | 366.67 | 8800 | 0.6013 | 0.8433 | 0.8434 |
| 0.0846 | 375.0 | 9000 | 0.5821 | 0.8445 | 0.8446 |
| 0.0836 | 383.33 | 9200 | 0.5978 | 0.8417 | 0.8419 |
| 0.0834 | 391.67 | 9400 | 0.5932 | 0.8444 | 0.8444 |
| 0.0827 | 400.0 | 9600 | 0.6029 | 0.8421 | 0.8422 |
| 0.081 | 408.33 | 9800 | 0.6010 | 0.8423 | 0.8424 |
| 0.0824 | 416.67 | 10000 | 0.6020 | 0.8419 | 0.8421 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_4096_512_46M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_46M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null |
2024-04-15T20:23:24+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
|
GUE\_prom\_prom\_300\_all-seqsight\_4096\_512\_46M-L32\_all
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4260
* F1 Score: 0.8426
* Accuracy: 0.8426
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_4096_512_46M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7752
- F1 Score: 0.6441
- Accuracy: 0.6444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5967 | 50.0 | 200 | 0.7536 | 0.6158 | 0.6185 |
| 0.3823 | 100.0 | 400 | 1.0065 | 0.6189 | 0.6235 |
| 0.2617 | 150.0 | 600 | 1.1147 | 0.6191 | 0.6198 |
| 0.1838 | 200.0 | 800 | 1.4037 | 0.6069 | 0.6074 |
| 0.1425 | 250.0 | 1000 | 1.5990 | 0.6099 | 0.6099 |
| 0.1172 | 300.0 | 1200 | 1.5709 | 0.6145 | 0.6148 |
| 0.0957 | 350.0 | 1400 | 1.6971 | 0.6161 | 0.6160 |
| 0.0822 | 400.0 | 1600 | 1.8275 | 0.6171 | 0.6173 |
| 0.0716 | 450.0 | 1800 | 1.8359 | 0.6294 | 0.6309 |
| 0.0624 | 500.0 | 2000 | 1.8058 | 0.6248 | 0.6259 |
| 0.0573 | 550.0 | 2200 | 1.8866 | 0.6169 | 0.6198 |
| 0.0517 | 600.0 | 2400 | 2.0152 | 0.6093 | 0.6099 |
| 0.047 | 650.0 | 2600 | 1.7649 | 0.6143 | 0.6173 |
| 0.0434 | 700.0 | 2800 | 2.0995 | 0.6178 | 0.6210 |
| 0.0383 | 750.0 | 3000 | 1.9797 | 0.6233 | 0.6235 |
| 0.0361 | 800.0 | 3200 | 2.0422 | 0.6207 | 0.6210 |
| 0.0341 | 850.0 | 3400 | 2.0673 | 0.6156 | 0.6160 |
| 0.0303 | 900.0 | 3600 | 2.1903 | 0.6223 | 0.6222 |
| 0.0291 | 950.0 | 3800 | 2.1579 | 0.6186 | 0.6185 |
| 0.0281 | 1000.0 | 4000 | 2.2313 | 0.6197 | 0.6198 |
| 0.0266 | 1050.0 | 4200 | 2.1570 | 0.6173 | 0.6173 |
| 0.0237 | 1100.0 | 4400 | 2.1183 | 0.6148 | 0.6148 |
| 0.0253 | 1150.0 | 4600 | 1.9395 | 0.6187 | 0.6198 |
| 0.0224 | 1200.0 | 4800 | 2.2272 | 0.6205 | 0.6210 |
| 0.0213 | 1250.0 | 5000 | 2.2636 | 0.6219 | 0.6222 |
| 0.0214 | 1300.0 | 5200 | 2.0081 | 0.6223 | 0.6222 |
| 0.0199 | 1350.0 | 5400 | 2.1998 | 0.6144 | 0.6148 |
| 0.0181 | 1400.0 | 5600 | 2.2357 | 0.6222 | 0.6222 |
| 0.0178 | 1450.0 | 5800 | 2.2654 | 0.6147 | 0.6148 |
| 0.0189 | 1500.0 | 6000 | 2.1997 | 0.6123 | 0.6123 |
| 0.0177 | 1550.0 | 6200 | 2.0925 | 0.6185 | 0.6185 |
| 0.0162 | 1600.0 | 6400 | 2.2021 | 0.6214 | 0.6222 |
| 0.0148 | 1650.0 | 6600 | 2.3634 | 0.6185 | 0.6198 |
| 0.0141 | 1700.0 | 6800 | 2.2453 | 0.6260 | 0.6259 |
| 0.0134 | 1750.0 | 7000 | 2.2202 | 0.6247 | 0.6247 |
| 0.0136 | 1800.0 | 7200 | 2.2105 | 0.6112 | 0.6111 |
| 0.0127 | 1850.0 | 7400 | 2.3350 | 0.6186 | 0.6185 |
| 0.0123 | 1900.0 | 7600 | 2.3961 | 0.6135 | 0.6136 |
| 0.0117 | 1950.0 | 7800 | 2.4216 | 0.6185 | 0.6185 |
| 0.0121 | 2000.0 | 8000 | 2.3294 | 0.6099 | 0.6099 |
| 0.0117 | 2050.0 | 8200 | 2.3180 | 0.6068 | 0.6074 |
| 0.0116 | 2100.0 | 8400 | 2.2751 | 0.6161 | 0.6160 |
| 0.0106 | 2150.0 | 8600 | 2.3885 | 0.6062 | 0.6062 |
| 0.0103 | 2200.0 | 8800 | 2.3912 | 0.6062 | 0.6062 |
| 0.0105 | 2250.0 | 9000 | 2.3589 | 0.6134 | 0.6136 |
| 0.0107 | 2300.0 | 9200 | 2.3820 | 0.6151 | 0.6160 |
| 0.0096 | 2350.0 | 9400 | 2.4459 | 0.6124 | 0.6123 |
| 0.0099 | 2400.0 | 9600 | 2.4171 | 0.6148 | 0.6148 |
| 0.0087 | 2450.0 | 9800 | 2.4305 | 0.6122 | 0.6123 |
| 0.0089 | 2500.0 | 10000 | 2.4277 | 0.6111 | 0.6111 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
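The card gives no usage section; a heavily hedged loading sketch, not from the card. It assumes the base checkpoint works with standard Auto classes and a sequence-classification head; adjust if seqsight requires a custom architecture:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_4096_512_46M")
base = AutoModelForSequenceClassification.from_pretrained("mahdibaghbanzadeh/seqsight_4096_512_46M")
model = PeftModel.from_pretrained(base, "mahdibaghbanzadeh/GUE_mouse_0-seqsight_4096_512_46M-L32_all")
model.eval()
```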
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_mouse_0-seqsight_4096_512_46M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_mouse_0-seqsight_4096_512_46M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null |
2024-04-15T20:31:05+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
|
GUE\_mouse\_0-seqsight\_4096\_512\_46M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7752
* F1 Score: 0.6441
* Accuracy: 0.6444
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Sappho_V0.0.3
Sappho_V0.0.3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: HuggingFaceH4/zephyr-7b-beta
parameters:
density: 0.53
weight: 0.90
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Jakolo121/Sappho_V0.0.3"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Text-generation pipeline with fp16 weights mapped across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"tags": ["merge", "mergekit", "lazymergekit", "HuggingFaceH4/zephyr-7b-beta"], "base_model": ["HuggingFaceH4/zephyr-7b-beta"]}
|
Jakolo121/Sappho_V0.0.3
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"HuggingFaceH4/zephyr-7b-beta",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:31:21+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #HuggingFaceH4/zephyr-7b-beta #base_model-HuggingFaceH4/zephyr-7b-beta #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Sappho_V0.0.3
Sappho_V0.0.3 is a merge of the following models using LazyMergekit:
* HuggingFaceH4/zephyr-7b-beta
## Configuration
## Usage
|
[
"# Sappho_V0.0.3\n\nSappho_V0.0.3 is a merge of the following models using LazyMergekit:\n* HuggingFaceH4/zephyr-7b-beta",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #HuggingFaceH4/zephyr-7b-beta #base_model-HuggingFaceH4/zephyr-7b-beta #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Sappho_V0.0.3\n\nSappho_V0.0.3 is a merge of the following models using LazyMergekit:\n* HuggingFaceH4/zephyr-7b-beta",
"## Configuration",
"## Usage"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_4096_512_46M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4409
- F1 Score: 0.8219
- Accuracy: 0.8221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5417 | 7.41 | 200 | 0.4578 | 0.7786 | 0.7798 |
| 0.4493 | 14.81 | 400 | 0.4229 | 0.7954 | 0.7958 |
| 0.4198 | 22.22 | 600 | 0.4168 | 0.8028 | 0.8028 |
| 0.3953 | 29.63 | 800 | 0.4097 | 0.8108 | 0.8108 |
| 0.3703 | 37.04 | 1000 | 0.3869 | 0.8204 | 0.8206 |
| 0.3487 | 44.44 | 1200 | 0.3879 | 0.8247 | 0.8248 |
| 0.3311 | 51.85 | 1400 | 0.3952 | 0.8272 | 0.8273 |
| 0.3143 | 59.26 | 1600 | 0.3963 | 0.8257 | 0.8258 |
| 0.3005 | 66.67 | 1800 | 0.3970 | 0.8280 | 0.8282 |
| 0.2835 | 74.07 | 2000 | 0.4003 | 0.8269 | 0.8270 |
| 0.2679 | 81.48 | 2200 | 0.4161 | 0.8253 | 0.8256 |
| 0.2545 | 88.89 | 2400 | 0.4224 | 0.8253 | 0.8254 |
| 0.2401 | 96.3 | 2600 | 0.4314 | 0.8297 | 0.8298 |
| 0.2273 | 103.7 | 2800 | 0.4364 | 0.8269 | 0.8270 |
| 0.2159 | 111.11 | 3000 | 0.4442 | 0.8285 | 0.8285 |
| 0.2034 | 118.52 | 3200 | 0.4489 | 0.8244 | 0.8245 |
| 0.1948 | 125.93 | 3400 | 0.4654 | 0.8283 | 0.8283 |
| 0.184 | 133.33 | 3600 | 0.4763 | 0.8285 | 0.8286 |
| 0.1743 | 140.74 | 3800 | 0.4921 | 0.8252 | 0.8252 |
| 0.1666 | 148.15 | 4000 | 0.5006 | 0.8269 | 0.8270 |
| 0.1607 | 155.56 | 4200 | 0.5131 | 0.8238 | 0.8240 |
| 0.1526 | 162.96 | 4400 | 0.5064 | 0.8221 | 0.8221 |
| 0.1457 | 170.37 | 4600 | 0.5284 | 0.8240 | 0.8240 |
| 0.1407 | 177.78 | 4800 | 0.5394 | 0.8216 | 0.8216 |
| 0.1349 | 185.19 | 5000 | 0.5618 | 0.8244 | 0.8245 |
| 0.1287 | 192.59 | 5200 | 0.5561 | 0.8241 | 0.8242 |
| 0.1242 | 200.0 | 5400 | 0.5577 | 0.8258 | 0.8258 |
| 0.1203 | 207.41 | 5600 | 0.5691 | 0.8262 | 0.8262 |
| 0.116 | 214.81 | 5800 | 0.5750 | 0.8229 | 0.8230 |
| 0.1132 | 222.22 | 6000 | 0.5779 | 0.8262 | 0.8262 |
| 0.1098 | 229.63 | 6200 | 0.5969 | 0.8258 | 0.8261 |
| 0.1058 | 237.04 | 6400 | 0.5952 | 0.8240 | 0.8240 |
| 0.1031 | 244.44 | 6600 | 0.5962 | 0.8263 | 0.8264 |
| 0.1001 | 251.85 | 6800 | 0.6097 | 0.8238 | 0.8239 |
| 0.0973 | 259.26 | 7000 | 0.5976 | 0.8247 | 0.8248 |
| 0.0955 | 266.67 | 7200 | 0.6174 | 0.8242 | 0.8243 |
| 0.0927 | 274.07 | 7400 | 0.6114 | 0.8246 | 0.8246 |
| 0.0912 | 281.48 | 7600 | 0.6229 | 0.8258 | 0.8258 |
| 0.0894 | 288.89 | 7800 | 0.6224 | 0.8275 | 0.8276 |
| 0.0875 | 296.3 | 8000 | 0.6091 | 0.8245 | 0.8245 |
| 0.0863 | 303.7 | 8200 | 0.6384 | 0.8241 | 0.8242 |
| 0.0846 | 311.11 | 8400 | 0.6328 | 0.8234 | 0.8234 |
| 0.0839 | 318.52 | 8600 | 0.6408 | 0.8264 | 0.8265 |
| 0.0821 | 325.93 | 8800 | 0.6338 | 0.8256 | 0.8256 |
| 0.0807 | 333.33 | 9000 | 0.6351 | 0.8250 | 0.8251 |
| 0.0796 | 340.74 | 9200 | 0.6359 | 0.8243 | 0.8243 |
| 0.0785 | 348.15 | 9400 | 0.6495 | 0.8262 | 0.8262 |
| 0.0788 | 355.56 | 9600 | 0.6464 | 0.8253 | 0.8254 |
| 0.0788 | 362.96 | 9800 | 0.6432 | 0.8269 | 0.8270 |
| 0.0775 | 370.37 | 10000 | 0.6445 | 0.8274 | 0.8274 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_mouse_1-seqsight_4096_512_46M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_mouse_1-seqsight_4096_512_46M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null |
2024-04-15T20:31:36+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
|
GUE\_mouse\_1-seqsight\_4096\_512\_46M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4409
* F1 Score: 0.8219
* Accuracy: 0.8221
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
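The section is a placeholder; a hedged quick-start inferred from the repo name and tags (llama, text-generation). The OpenAPI-style prompt and the generation settings are assumptions:
```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="BohdanPetryshyn/codellama-7b-openapi-completion-merged",
    device_map="auto",  # requires accelerate
)
print(pipe("openapi: 3.0.0\ninfo:\n  title:", max_new_tokens=64)[0]["generated_text"])
```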
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
BohdanPetryshyn/codellama-7b-openapi-completion-merged
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:32:27+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# ECE-TW3-JRGL-VHF6
ECE-TW3-JRGL-VHF6 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [MTSAIR/MultiVerse_70B](https://huggingface.co/MTSAIR/MultiVerse_70B)
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
## 🧩 Configuration
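The configuration block from the original card was not preserved in this dump. Purely for illustration, a mergekit setup over the two listed models might look like the sketch below; the merge method, layer ranges, and interpolation value are assumptions, not the values actually used:

```yaml
# Hypothetical sketch -- the real configuration for ECE-TW3-JRGL-VHF6 is not available here.
slices:
- sources:
  - model: MTSAIR/MultiVerse_70B
    layer_range: [0, 80]
  - model: abacusai/Smaug-72B-v0.1
    layer_range: [0, 80]
merge_method: slerp
base_model: abacusai/Smaug-72B-v0.1
parameters:
  t:
    - value: 0.5
dtype: bfloat16
```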
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "MTSAIR/MultiVerse_70B", "abacusai/Smaug-72B-v0.1"]}
|
IAFrance/ECE-TW3-JRGL-VHF6
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"MTSAIR/MultiVerse_70B",
"abacusai/Smaug-72B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:35:51+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #MTSAIR/MultiVerse_70B #abacusai/Smaug-72B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ECE-TW3-JRGL-VHF6
ECE-TW3-JRGL-VHF6 is a merge of the following models using mergekit:
* MTSAIR/MultiVerse_70B
* abacusai/Smaug-72B-v0.1
## Configuration
|
[
"# ECE-TW3-JRGL-VHF6\n\nECE-TW3-JRGL-VHF6 is a merge of the following models using mergekit:\n* MTSAIR/MultiVerse_70B\n* abacusai/Smaug-72B-v0.1",
"## Configuration"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #MTSAIR/MultiVerse_70B #abacusai/Smaug-72B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ECE-TW3-JRGL-VHF6\n\nECE-TW3-JRGL-VHF6 is a merge of the following models using mergekit:\n* MTSAIR/MultiVerse_70B\n* abacusai/Smaug-72B-v0.1",
"## Configuration"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
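No snippet is given in the card. Judging only from the repository name, this appears to be a BLIP-2 checkpoint with a LoRA adapter, so a loading sketch might look like the following; the base checkpoint (`Salesforce/blip2-opt-2.7b`), the PEFT adapter format, and the captioning prompt are all assumptions rather than documented facts:

```python
import requests
from PIL import Image
from peft import PeftModel
from transformers import Blip2ForConditionalGeneration, Blip2Processor

# Assumed base checkpoint; the card does not state which BLIP-2 variant was used.
base = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")

# Attach the LoRA weights from this repository (assumes a PEFT-format adapter).
model = PeftModel.from_pretrained(base, "achen2001/Blip2_Lora")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="a photo of", return_tensors="pt")
print(processor.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```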
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
achen2001/Blip2_Lora
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-15T20:36:54+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course
# notebook; it downloads the pickled model dictionary from the Hub.
model = load_from_hub(repo_id="dragonflymoss/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
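The returned dictionary also stores the learned Q-table; assuming it is kept under the `qtable` key (as in the course's reference implementation), the greedy policy can be rolled out like this:

```python
import numpy as np

# Older Gym API shown; recent Gymnasium versions return (state, info) from
# reset() and a 5-tuple from step().
state = env.reset()
done = False
while not done:
    action = np.argmax(model["qtable"][state])  # pick the highest-value action
    state, reward, done, info = env.step(action)
```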
|
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
|
dragonflymoss/q-FrozenLake-v1-4x4-noSlippery
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-15T20:37:07+00:00
|
[] |
[] |
TAGS
#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing FrozenLake-v1
This is a trained model of a Q-Learning agent playing FrozenLake-v1.
## Usage
|
[
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
[
"TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage"
] |
text-generation
|
transformers
|
# ECE-TW3-JRGL-VHF4
ECE-TW3-JRGL-VHF4 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [MTSAIR/MultiVerse_70B](https://huggingface.co/MTSAIR/MultiVerse_70B)
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
## 🧩 Configuration
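As with the VHF6 card earlier in this dump, the configuration block is missing; a hypothetical mergekit sketch (method, layer ranges, and parameters assumed, not actual) could be:

```yaml
# Hypothetical sketch -- the real configuration for ECE-TW3-JRGL-VHF4 is not available here.
slices:
- sources:
  - model: MTSAIR/MultiVerse_70B
    layer_range: [0, 80]
  - model: abacusai/Smaug-72B-v0.1
    layer_range: [0, 80]
merge_method: slerp
base_model: MTSAIR/MultiVerse_70B
parameters:
  t:
    - value: 0.5
dtype: bfloat16
```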
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "MTSAIR/MultiVerse_70B", "abacusai/Smaug-72B-v0.1"]}
|
IAFrance/ECE-TW3-JRGL-VHF4
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"MTSAIR/MultiVerse_70B",
"abacusai/Smaug-72B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:38:35+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #MTSAIR/MultiVerse_70B #abacusai/Smaug-72B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ECE-TW3-JRGL-VHF4
ECE-TW3-JRGL-VHF4 is a merge of the following models using mergekit:
* MTSAIR/MultiVerse_70B
* abacusai/Smaug-72B-v0.1
## Configuration
|
[
"# ECE-TW3-JRGL-VHF4\n\nECE-TW3-JRGL-VHF4 is a merge of the following models using mergekit:\n* MTSAIR/MultiVerse_70B\n* abacusai/Smaug-72B-v0.1",
"## Configuration"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #MTSAIR/MultiVerse_70B #abacusai/Smaug-72B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ECE-TW3-JRGL-VHF4\n\nECE-TW3-JRGL-VHF4 is a merge of the following models using mergekit:\n* MTSAIR/MultiVerse_70B\n* abacusai/Smaug-72B-v0.1",
"## Configuration"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FineTunedModelTest
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
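These settings map one-to-one onto a 🤗 `TrainingArguments`/`Trainer` setup; the sketch below reproduces them with a tiny in-memory dataset, since the card says the dataset is unknown (the two-label head and the dummy examples are placeholders):

```python
from datasets import Dataset
from transformers import (AutoTokenizer, GPT2ForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)  # label count assumed
model.config.pad_token_id = tokenizer.pad_token_id

# Placeholder data; the card does not name the training set.
data = Dataset.from_dict({"text": ["good movie", "bad movie"], "label": [1, 0]})
data = data.map(lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=16))

# Adam betas=(0.9, 0.999) and epsilon=1e-08 are already the Trainer defaults.
args = TrainingArguments(
    output_dir="FineTunedModelTest",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
trainer = Trainer(model=model, args=args, train_dataset=data, eval_dataset=data)
trainer.train()
```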
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "FineTunedModelTest", "results": []}]}
|
Nada81/FineTunedModelTest
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-15T20:39:16+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #gpt2 #text-classification #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# FineTunedModelTest
This model is a fine-tuned version of gpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# FineTunedModelTest\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-classification #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# FineTunedModelTest\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |