| Column | Type | Values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
| arxiv | sequencelengths | 0 to 201 |
| languages | sequencelengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | sequencelengths | 0 to 722 |
| processed_texts | sequencelengths | 1 to 723 |
sentence-similarity
sentence-transformers
# DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC20

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model is straightforward once [sentence-transformers](https://www.SBERT.net) is installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC20')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC20')
model = AutoModel.from_pretrained('DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC20')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC20)

## Training

The model was trained with the following parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 110 with parameters:

```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit() method:

```
{
    "epochs": 20,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 220.0,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Citing & Authors
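Since the model targets sentence similarity and was trained with `CosineSimilarityLoss`, a typical downstream step is scoring a sentence pair. The sketch below is illustrative only: the two sentences are invented, and it uses the `util.cos_sim` helper from sentence-transformers rather than anything specific to this checkpoint.

```python
from sentence_transformers import SentenceTransformer, util

# Two made-up sentences; the model id is taken from this card.
model = SentenceTransformer('DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC20')
embeddings = model.encode(
    ["A classic chocolate cake recipe", "How to bake a chocolate cake"],
    convert_to_tensor=True,
)

# Cosine similarity between the two sentence embeddings (higher means more similar).
score = util.cos_sim(embeddings[0], embeddings[1])
print(score.item())
```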
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC20
null
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
null
2024-04-29T03:40:07+00:00
[]
[]
TAGS #sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
# DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC20 This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Usage (HuggingFace Transformers) Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 110 with parameters: Loss: 'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC20\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 110 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n", "# DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC20\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 110 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
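The "How to Get Started with the Model" section above is still a placeholder. A minimal, non-authoritative sketch of loading this repository with the transformers text-generation pipeline (assuming the checkpoint loads as a standard causal LM, as the llama and text-generation tags suggest) might look like:

```python
from transformers import pipeline
import torch

# Assumption: shallow6414/ft788j9 hosts a standard causal LM checkpoint.
generator = pipeline(
    "text-generation",
    model="shallow6414/ft788j9",
    torch_dtype=torch.bfloat16,  # saves memory; drop this argument to stay in fp32
    device_map="auto",           # requires accelerate to be installed
)

print(generator("Hello, my name is", max_new_tokens=30)[0]["generated_text"])
```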
{"library_name": "transformers", "tags": []}
shallow6414/ft788j9
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T03:41:02+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GIT_inf_w_caption_blur_ep5

This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
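The card does not include a usage snippet. Since the base model microsoft/git-base is an image-captioning checkpoint, a minimal sketch of generating a caption with this fine-tune could look like the following; it assumes the processor was pushed alongside the weights (otherwise it can be loaded from microsoft/git-base), and the image URL is only an illustration.

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

model_id = "vishwa27/GIT_inf_w_caption_blur_ep5"  # repository id from this card
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Any RGB image works here; this COCO URL is just a placeholder.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Encode the image and let the model generate a caption.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```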
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/git-base", "model-index": [{"name": "GIT_inf_w_caption_blur_ep5", "results": []}]}
vishwa27/GIT_inf_w_caption_blur_ep5
null
[ "transformers", "safetensors", "git", "text-generation", "generated_from_trainer", "base_model:microsoft/git-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T03:45:43+00:00
[]
[]
TAGS #transformers #safetensors #git #text-generation #generated_from_trainer #base_model-microsoft/git-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
# GIT_inf_w_caption_blur_ep5 This model is a fine-tuned version of microsoft/git-base on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
[ "# GIT_inf_w_caption_blur_ep5\n\nThis model is a fine-tuned version of microsoft/git-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #git #text-generation #generated_from_trainer #base_model-microsoft/git-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# GIT_inf_w_caption_blur_ep5\n\nThis model is a fine-tuned version of microsoft/git-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/09m6mmy
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T03:48:56+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/pmxyqjx
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T03:49:16+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
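The repository name and the bare transformers/safetensors tags suggest this checkpoint is an adapter rather than a full model, but the card does not say so. Purely as an illustration (both the adapter interpretation and the base model below are assumptions, not facts from this card), loading it with PEFT could look like:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# ASSUMPTIONS: this repo holds a PEFT adapter, and the true base model is unknown;
# "meta-llama/Llama-2-7b-hf" is only a placeholder to make the sketch runnable.
base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "HenryCai1129/adapter-llama-adapterhappy2sad-2k-search-50-0.003"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```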
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adapterhappy2sad-2k-search-50-0.003
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-29T03:55:39+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
# Sub-path Linear Approximation Model (SLAM): SD1.5

Paper: [https://arxiv.org/abs/2404.13903](https://arxiv.org/abs/2404.13903)<br>
Project Page: [https://subpath-linear-approx-model.github.io/](https://subpath-linear-approx-model.github.io/)<br>

The checkpoint is distilled from [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only 2 to 4.

## Usage

First, install the latest version of the Diffusers library as well as peft, accelerate and transformers.

```bash
pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate peft
```

We implement SLAM to be compatible with [LCMScheduler](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler). You can use SLAM just like you use LCM, keeping guidance_scale fixed at 1.

```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("alimama-creative/slam-sd1.5")

# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float16)

prompt = "a painting of a majestic kingdom with towering castles, lush gardens, ice and snow world"

num_inference_steps = 2

images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=1, lcm_origin_steps=50, output_type="pil").images
```

![castle2_slam_step4.png](https://intranetproxy.alipay.com/skylark/lark/0/2024/png/102756509/1714305791356-5ba636a5-8435-4c90-84f3-f06163ebab51.png)
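The snippet above leaves the pipeline on whatever scheduler ships with the repository. If the checkpoint does not already configure an LCM-style scheduler, one way to set it explicitly, mirroring standard LCM usage in diffusers (an assumption on my part, not something this card states), is:

```python
from diffusers import DiffusionPipeline, LCMScheduler
import torch

pipe = DiffusionPipeline.from_pretrained("alimama-creative/slam-sd1.5", torch_dtype=torch.float16)

# Swap in the LCM-compatible scheduler the card says SLAM is designed for.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

image = pipe(
    "a painting of a majestic kingdom with towering castles",
    num_inference_steps=4,   # SLAM targets 2-4 steps
    guidance_scale=1.0,      # the card recommends keeping guidance_scale at 1
).images[0]
image.save("kingdom.png")
```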
{"license": "apache-2.0", "library_name": "diffusers", "tags": ["text-to-image"], "inference": false}
alimama-creative/slam-sd1.5
null
[ "diffusers", "text-to-image", "arxiv:2404.13903", "license:apache-2.0", "region:us" ]
null
2024-04-29T03:55:51+00:00
[ "2404.13903" ]
[]
TAGS #diffusers #text-to-image #arxiv-2404.13903 #license-apache-2.0 #region-us
# Sub-path Linear Approximation Model (SLAM): SD1.5 Paper: URL Project Page: URL The checkpoint is distilled from runwayml/stable-diffusion-v1-5 with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only 2-4 steps. ## Usage First, install the latest version of the Diffusers library as well as peft, accelerate and transformers. We implement SLAM to be compatible with LCMScheduler. You can use SLAM just like you use LCM, with guidance_scale always set to 1. !castle2_slam_step4.png
[ "# Sub-path Linear Approximation Model (SLAM): SD1.5\nPaper: URL\nProject Page: URL\nThe checkpoint is a distilled from runwayml/stable-diffusion-v1-5 with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only between 2-4 steps.", "## Usage\nFirst, install the latest version of the Diffusers library as well as peft, accelerate and transformers.\n\n\nWe implement SLAM to be compatible with LCMScheduler. You can use SLAM just like you use LCM, with guidance_scale set to 1 constantly.\n\n!castle2_slam_step4.png" ]
[ "TAGS\n#diffusers #text-to-image #arxiv-2404.13903 #license-apache-2.0 #region-us \n", "# Sub-path Linear Approximation Model (SLAM): SD1.5\nPaper: URL\nProject Page: URL\nThe checkpoint is a distilled from runwayml/stable-diffusion-v1-5 with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only between 2-4 steps.", "## Usage\nFirst, install the latest version of the Diffusers library as well as peft, accelerate and transformers.\n\n\nWe implement SLAM to be compatible with LCMScheduler. You can use SLAM just like you use LCM, with guidance_scale set to 1 constantly.\n\n!castle2_slam_step4.png" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # job_postings_mlm_model_500k This model is a fine-tuned version of [giyoung-kwon-0902/job_postings_mlm_model_450k](https://huggingface.co/giyoung-kwon-0902/job_postings_mlm_model_450k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1135 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.1519 | 1.0 | 14307 | 0.1363 | | 0.1242 | 2.0 | 28614 | 0.1135 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
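The card above gives hyperparameters but no usage snippet. A minimal fill-mask sketch (not part of the original card; the example sentence is purely illustrative) might look like this:

```python
from transformers import pipeline

# Load the fine-tuned job-postings MLM checkpoint with the standard fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="giyoung-kwon-0902/job_postings_mlm_model_500k")

# RoBERTa-style checkpoints use "<mask>" as the mask token.
for prediction in fill_mask("We are hiring a senior <mask> engineer."):
    print(prediction["token_str"], round(prediction["score"], 3))
```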
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "giyoung-kwon-0902/job_postings_mlm_model_450k", "model-index": [{"name": "job_postings_mlm_model_500k", "results": []}]}
giyoung-kwon-0902/job_postings_mlm_model_500k
null
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:giyoung-kwon-0902/job_postings_mlm_model_450k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T03:56:07+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-giyoung-kwon-0902/job_postings_mlm_model_450k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
job\_postings\_mlm\_model\_500k =============================== This model is a fine-tuned version of giyoung-kwon-0902/job\_postings\_mlm\_model\_450k on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1135 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-giyoung-kwon-0902/job_postings_mlm_model_450k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
fill-mask
transformers
## SPLADE-v3-Lexical SPLADE-v3-Lexical is the SPLADE-Lexical version of `naver/splade-v3` (no expansion on the query side - only term weighting). For more details, see our arXiv companion book: https://arxiv.org/abs/2403.06789 To use SPLADE, please visit our GitHub repository: https://github.com/naver/splade ## Performance | | MRR@10 (MS MARCO dev) | avg nDCG@10 (BEIR-13) | | --- | --- | --- | | `naver/splade-v3-lexical` | 40.0 | 49.1 | ## Citation If you use our checkpoint, please cite our work: ``` @misc{lassance2024spladev3, title={SPLADE-v3: New baselines for SPLADE}, author={Carlos Lassance and Hervé Déjean and Thibault Formal and Stéphane Clinchant}, year={2024}, eprint={2403.06789}, archivePrefix={arXiv}, primaryClass={cs.IR}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
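The card defers usage to the naver/splade GitHub repository. As a rough, unofficial sketch (not the official naver/splade code) of how the document-side representation of this lexical model can be computed with plain transformers, using the standard SPLADE pooling max_i log(1 + ReLU(logit_i)); the model id and loading path here are assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "naver/splade-v3-lexical"  # assumed Hugging Face id for this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

doc = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(doc, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, vocab_size)

# SPLADE document weights: log-saturated ReLU, max-pooled over non-padding tokens.
weights = torch.max(
    torch.log1p(torch.relu(logits)) * inputs["attention_mask"].unsqueeze(-1),
    dim=1,
).values.squeeze(0)

top = torch.topk(weights, k=10)
print([(tokenizer.convert_ids_to_tokens(int(i)), round(float(v), 2)) for i, v in zip(top.indices, top.values)])
```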
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "tags": ["splade"]}
nirantk/splade-v3-lexical
null
[ "transformers", "pytorch", "bert", "fill-mask", "splade", "en", "arxiv:2403.06789", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T03:59:50+00:00
[ "2403.06789" ]
[ "en" ]
TAGS #transformers #pytorch #bert #fill-mask #splade #en #arxiv-2403.06789 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
SPLADE-v3-Lexical ----------------- SPLADE-v3-Lexical is the SPLADE-Lexical version of 'naver/splade-v3' (no expansion on the query side - only term weighting). For more details, see our arXiv companion book: URL To use SPLADE, please visit our GitHub repository: URL Performance ----------- 'naver/splade-v3-lexical': MRR@10 (MS MARCO dev) = 40.0, avg nDCG@10 (BEIR-13) = 49.1 If you use our checkpoint, please cite our work:
[]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #splade #en #arxiv-2403.06789 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
transformers
# Uploaded model - **Developed by:** dmorrigan - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
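The card does not show how to load the checkpoint. Assuming the repository holds PEFT LoRA adapters on top of the quantized base model (as the name suggests; if it instead contains merged weights, a plain AutoModelForCausalLM call would apply), a loading sketch could be:

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

repo = "dmorrigan/HebrewLyricsLoRA-23K-4Epoch"
# AutoPeftModelForCausalLM reads the base model recorded in the adapter config
# (unsloth/llama-3-8b-bnb-4bit) and attaches the LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("Write a short song about the sea:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```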
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
dmorrigan/HebrewLyricsLoRA-23K-4Epoch
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:02:13+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: dmorrigan - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: dmorrigan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: dmorrigan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-to-image
diffusers
# Sub-path Linear Approximation Model (SLAM): DreamShaperV7 Paper: [https://arxiv.org/abs/2404.13903](https://arxiv.org/abs/2404.13903)<br/> Project Page: [https://subpath-linear-approx-model.github.io/](https://subpath-linear-approx-model.github.io/)<br/> The checkpoint is distilled from [Lykon/dreamshaper-7](https://huggingface.co/Lykon/dreamshaper-7) with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only 2-4 steps. ## Usage First, install the latest version of the Diffusers library as well as peft, accelerate and transformers. ```bash pip install --upgrade pip pip install --upgrade diffusers transformers accelerate peft ``` We implement SLAM to be compatible with [LCMScheduler](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler). You can use SLAM just like you use LCM, with guidance_scale always set to 1. ```python from diffusers import DiffusionPipeline import torch pipe = DiffusionPipeline.from_pretrained("alimama-creative/slam-dreamshaper7") # To save GPU memory, torch.float16 can be used, but it may compromise image quality. pipe.to(torch_device="cuda", torch_dtype=torch.float16) prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" num_inference_steps = 4 images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=1, lcm_origin_steps=50, output_type="pil").images ``` ![slam-dreamshaper.png](https://intranetproxy.alipay.com/skylark/lark/0/2024/png/102756509/1714305398411-74a8dd57-a933-42d6-bc43-2e88bce18130.png#clientId=uaea4a13b-3c46-4&from=ui&height=355&id=uc8945fda&originHeight=512&originWidth=512&originalType=binary&ratio=2&rotation=0&showTitle=false&size=386147&status=done&style=none&taskId=ubb40de33-2d75-4880-bb35-546b916b5c5&title=&width=355)
{"license": "apache-2.0", "library_name": "diffusers", "tags": ["text-to-image"], "inference": false}
alimama-creative/slam-dreamshaper7
null
[ "diffusers", "text-to-image", "arxiv:2404.13903", "license:apache-2.0", "region:us" ]
null
2024-04-29T04:02:50+00:00
[ "2404.13903" ]
[]
TAGS #diffusers #text-to-image #arxiv-2404.13903 #license-apache-2.0 #region-us
# Sub-path Linear Approximation Model (SLAM): DreamShaperV7 Paper: URL Project Page: URL The checkpoint is distilled from URL with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only 2-4 steps. ## Usage First, install the latest version of the Diffusers library as well as peft, accelerate and transformers. We implement SLAM to be compatible with LCMScheduler. You can use SLAM just like you use LCM, with guidance_scale always set to 1. !URL
[ "# Sub-path Linear Approximation Model (SLAM): DreamShaperV7\nPaper: URL\nProject Page: URL\nThe checkpoint is a distilled from URL with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only between 2-4 steps.", "## Usage\nFirst, install the latest version of the Diffusers library as well as peft, accelerate and transformers.\n\n\nWe implement SLAM to be compatible with LCMScheduler. You can use SLAM just like you use LCM, with guidance_scale set to 1 constantly.\n\n!URL" ]
[ "TAGS\n#diffusers #text-to-image #arxiv-2404.13903 #license-apache-2.0 #region-us \n", "# Sub-path Linear Approximation Model (SLAM): DreamShaperV7\nPaper: URL\nProject Page: URL\nThe checkpoint is a distilled from URL with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only between 2-4 steps.", "## Usage\nFirst, install the latest version of the Diffusers library as well as peft, accelerate and transformers.\n\n\nWe implement SLAM to be compatible with LCMScheduler. You can use SLAM just like you use LCM, with guidance_scale set to 1 constantly.\n\n!URL" ]
sentence-similarity
sentence-transformers
# DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC10 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC10') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC10') model = AutoModel.from_pretrained('DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC10') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC10) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 55 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 55.0, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC10
null
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:02:53+00:00
[]
[]
TAGS #sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
# DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC10 This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Usage (HuggingFace Transformers) Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 55 with parameters: Loss: 'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC10\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 55 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n", "# DivyaMereddy007/FewLayers_Finetuning_V1_TrainSetenceTransforme-Finetuning_COpyfromv5finetuneEPOC10\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 55 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/6tcc12y
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:07:19+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
# Sub-path Linear Approximation Model (SLAM) LoRA: SDXL Paper: [https://arxiv.org/abs/2404.13903](https://arxiv.org/abs/2404.13903)<br/> Project Page: [https://subpath-linear-approx-model.github.io/](https://subpath-linear-approx-model.github.io/)<br/> The checkpoint is distilled from [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only 2-4 steps. ## Usage First, install the latest version of the Diffusers library as well as peft, accelerate and transformers. ```bash pip install --upgrade pip pip install --upgrade diffusers transformers accelerate peft ``` We implement SLAM to be compatible with [LCMScheduler](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler). You can use SLAM-LoRA just like you use LCM-LoRA. ```python import torch from diffusers import LCMScheduler, AutoPipelineForText2Image model_id = "stabilityai/stable-diffusion-xl-base-1.0" adapter_id = "alimama-creative/slam-lora-sdxl" pipe = AutoPipelineForText2Image.from_pretrained(model_id, torch_dtype=torch.float16, variant="fp16") pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) pipe.to("cuda") # load and fuse the SLAM LoRA weights pipe.load_lora_weights(adapter_id) pipe.fuse_lora() prompt = "A brown teddy bear holding a glass vase in front of a grave." image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=1.0).images[0] ``` ![slam-lora-sdxl.png](https://intranetproxy.alipay.com/skylark/lark/0/2024/png/102756509/1714304803200-6afaeaf7-cc48-4f7e-8e8c-39d03c81a20e.png#clientId=uaea4a13b-3c46-4&from=ui&id=uf3734880&originHeight=2603&originWidth=1947&originalType=binary&ratio=2&rotation=0&showTitle=false&size=8992761&status=done&style=none&taskId=ubd35ade6-2318-4655-9f63-7b798b78e00&title=)
{"license": "apache-2.0", "library_name": "diffusers", "tags": ["text-to-image"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "inference": false}
alimama-creative/slam-lora-sdxl
null
[ "diffusers", "text-to-image", "arxiv:2404.13903", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:apache-2.0", "region:us" ]
null
2024-04-29T04:07:28+00:00
[ "2404.13903" ]
[]
TAGS #diffusers #text-to-image #arxiv-2404.13903 #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-apache-2.0 #region-us
# Sub-path Linear Approximation Model (SLAM) LoRA: SDXL Paper: URL Project Page: URL The checkpoint is distilled from stabilityai/stable-diffusion-xl-base-1.0 with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only 2-4 steps. ## Usage First, install the latest version of the Diffusers library as well as peft, accelerate and transformers. We implement SLAM to be compatible with LCMScheduler. You can use SLAM-LoRA just like you use LCM-LoRA. !URL
[ "# Sub-path Linear Approximation Model (SLAM) LoRA: SDXL\nPaper: URL\nProject Page: URL\nThe checkpoint is a distilled from stabilityai/stable-diffusion-xl-base-1.0 with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only between 2-4 steps.", "## Usage\nFirst, install the latest version of the Diffusers library as well as peft, accelerate and transformers.\n\nWe implement SLAM to be compatible with LCMScheduler. You can use SLAM-LoRA just like you use LCM-LoRA.\n\n\n!URL" ]
[ "TAGS\n#diffusers #text-to-image #arxiv-2404.13903 #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-apache-2.0 #region-us \n", "# Sub-path Linear Approximation Model (SLAM) LoRA: SDXL\nPaper: URL\nProject Page: URL\nThe checkpoint is a distilled from stabilityai/stable-diffusion-xl-base-1.0 with our proposed Sub-path Linear Approximation Model, which reduces the number of inference steps to only between 2-4 steps.", "## Usage\nFirst, install the latest version of the Diffusers library as well as peft, accelerate and transformers.\n\nWe implement SLAM to be compatible with LCMScheduler. You can use SLAM-LoRA just like you use LCM-LoRA.\n\n\n!URL" ]
image-text-to-text
transformers
### TinyLLaVA We trained a model with fewer than 1B parameters using the TinyLLaVA approach, employing the same training settings as [TinyLLaVA](https://github.com/DLCV-BUAA/TinyLLaVABench). For the Language and Vision models, we chose [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384), respectively. The Connector was configured with a 2-layer MLP. The dataset used for training is the same as [LLaVA](https://github.com/haotian-liu/LLaVA). During testing, we found that [TinyLLaVA-0.55B](https://huggingface.co/jiajunlong/TinyLLaVA-0.55B) exhibited significantly faster inference speed on CPU compared to [TinyLLaVA-1.5B](https://huggingface.co/bczhou/TinyLLaVA-1.5B). ### Usage 1. Download the generation script "generate_model.py". 2. Run the following command: ```bash python generate_model.py --model jiajunlong/TinyLLaVA-0.89B --prompt 'you want to ask' --image '/path/to/related/image' ``` or execute the following test code: ```python import os from transformers import AutoTokenizer, AutoModelForCausalLM from generate_model import * model = AutoModelForCausalLM.from_pretrained("jiajunlong/TinyLLaVA-0.55B", trust_remote_code=True) config = model.config tokenizer = AutoTokenizer.from_pretrained("jiajunlong/TinyLLaVA-0.55B", use_fast=False, model_max_length=config.tokenizer_model_max_length, padding_side=config.tokenizer_padding_side) prompt = "you want to ask" image = "/path/to/related/image" output_text, generation_time = generate(prompt=prompt, image=image, model=model, tokenizer=tokenizer) print_txt = ( f'\r\n{"=" * os.get_terminal_size().columns}\r\n' '\033[1m Prompt + Generated Output\033[0m\r\n' f'{"-" * os.get_terminal_size().columns}\r\n' f'{output_text}\r\n' f'{"-" * os.get_terminal_size().columns}\r\n' '\r\nGeneration took' f'\033[1m\033[92m {round(generation_time, 2)} \033[0m' 'seconds.\r\n' ) print(print_txt) ``` ### Result | model_name | gqa | textvqa | sqa | vqav2 | MME | MMB | MM-VET | | :----------------------------------------------------------: | ----- | ------- | ----- | ----- | ------- | ----- | ------ | | [TinyLLaVA-1.5B](https://huggingface.co/bczhou/TinyLLaVA-1.5B) | 60.3 | 51.7 | 60.3 | 76.9 | 1276.5 | 55.2 | 25.8 | | [TinyLLaVA-0.55B](https://huggingface.co/jiajunlong/TinyLLaVA-0.89B) | 53.87 | 44.02 | 54.09 | 71.74 | 1118.75 | 37.8 | 20 |
{"license": "apache-2.0", "pipeline_tag": "image-text-to-text"}
jiajunlong/TinyLLaVA-0.89B
null
[ "transformers", "safetensors", "tinyllava", "text-generation", "image-text-to-text", "custom_code", "license:apache-2.0", "autotrain_compatible", "region:us" ]
null
2024-04-29T04:09:45+00:00
[]
[]
TAGS #transformers #safetensors #tinyllava #text-generation #image-text-to-text #custom_code #license-apache-2.0 #autotrain_compatible #region-us
### TinyLLaVA We trained a model with fewer than 1B parameters using the TinyLLaVA approach, employing the same training settings as TinyLLaVA. For the Language and Vision models, we chose OpenELM-450M-Instruct and siglip-so400m-patch14-384, respectively. The Connector was configured with a 2-layer MLP. The dataset used for training is the same as LLaVA. During testing, we found that TinyLLaVA-0.55B exhibited significantly faster inference speed on CPU compared to TinyLLaVA-1.5B. ### Usage 1. Download the generation script "generate\_model.py". 2. Run the following command: or execute the following test code: ### Result
[ "### TinyLLaVA\n\n\nWe trained 1 model with fewer than 1B parameters using the TinyLLaVA approach, employing the same training settings as TinyLLaVA. For the Language and Vision models, we chose OpenELM-450M-Instruct and siglip-so400m-patch14-384, respectively. The Connector was configured with a 2-layer MLP. The dataset used for training is the save as LLaVA. During testing, we found that TinyLLaVA-0.55B exhibited significantly faster inference speed on CPU compared to TinyLLaVA-1.5B", "### Usage\n\n\n1. you need to download the generate file \"generate\\_model.py\".\n2. running the following command:\n\n\nor execute the following test code:", "### Result" ]
[ "TAGS\n#transformers #safetensors #tinyllava #text-generation #image-text-to-text #custom_code #license-apache-2.0 #autotrain_compatible #region-us \n", "### TinyLLaVA\n\n\nWe trained 1 model with fewer than 1B parameters using the TinyLLaVA approach, employing the same training settings as TinyLLaVA. For the Language and Vision models, we chose OpenELM-450M-Instruct and siglip-so400m-patch14-384, respectively. The Connector was configured with a 2-layer MLP. The dataset used for training is the save as LLaVA. During testing, we found that TinyLLaVA-0.55B exhibited significantly faster inference speed on CPU compared to TinyLLaVA-1.5B", "### Usage\n\n\n1. you need to download the generate file \"generate\\_model.py\".\n2. running the following command:\n\n\nor execute the following test code:", "### Result" ]
null
null
# T3qm7xpShadowm7exp-7B T3qm7xpShadowm7exp-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 - model: nlpguy/T3QM7XP - model: mahiatlinux/ShadowM7EXP-7B merge_method: model_stock base_model: mistralai/Mistral-7B-v0.1 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/T3qm7xpShadowm7exp-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
automerger/T3qm7xpShadowm7exp-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "region:us" ]
null
2024-04-29T04:10:23+00:00
[]
[]
TAGS #merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us
# T3qm7xpShadowm7exp-7B T3qm7xpShadowm7exp-7B is an automated merge created by Maxime Labonne using the following configuration. ## Configuration ## Usage
[ "# T3qm7xpShadowm7exp-7B\n\nT3qm7xpShadowm7exp-7B is an automated merge created by Maxime Labonne using the following configuration.", "## Configuration", "## Usage" ]
[ "TAGS\n#merge #mergekit #lazymergekit #automerger #license-apache-2.0 #region-us \n", "# T3qm7xpShadowm7exp-7B\n\nT3qm7xpShadowm7exp-7B is an automated merge created by Maxime Labonne using the following configuration.", "## Configuration", "## Usage" ]
text-generation
transformers
The ai-forever/rugpt3large_based_on_gpt2 based model was fine-tuned for question-answering tasks in Russian. Version: dataset of 60k rows, 1st epoch. Further models will follow later. Answer quality: average. Request format: `<s> [user] Query (for now it only answers questions reliably) [assistant] ... </s>` Usage example: ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel model = GPT2LMHeadModel.from_pretrained("ERmak1581/rugpt3large_for_qna_60k1") tokenizer = GPT2Tokenizer.from_pretrained("ERmak1581/rugpt3large_for_qna_60k1") print(tokenizer.decode(model.generate( tokenizer.encode('<s> [user] Почему небо синее? [assistant]', return_tensors="pt"), max_new_tokens=100, no_repeat_ngram_size=2, temperature=0.7, do_sample=True)[0])) ```
{"language": ["ru"], "license": "mit", "library_name": "transformers", "pipeline_tag": "text-generation"}
ERmak1581/rugpt3large_for_qna_60k1
null
[ "transformers", "safetensors", "gpt2", "text-generation", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:11:15+00:00
[]
[ "ru" ]
TAGS #transformers #safetensors #gpt2 #text-generation #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
The ai-forever/rugpt3large_based_on_gpt2 based model was fine-tuned for question-answering tasks in Russian. Version: dataset of 60k rows, 1st epoch. Further models will follow later. Answer quality: average. Request format: '<s> [user] Query (for now it only answers questions reliably) [assistant] ... </s>' Usage example:
[]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #ru #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Paoja/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
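The snippet above relies on a load_from_hub helper defined in the Deep RL course notebook. A more self-contained sketch (assuming, as in that course, that the pickle stores the Q-table under the "qtable" key and the environment id under "env_id") could be:

```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Download the pickled model dictionary from the Hub.
path = hf_hub_download(repo_id="Paoja/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# Recreate the environment; this checkpoint was trained on the non-slippery 4x4 map.
env = gym.make(model["env_id"], is_slippery=False)

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(model["qtable"][state].argmax())  # act greedily w.r.t. the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```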
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
Paoja/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-29T04:11:17+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1 This is a trained model of a Q-Learning agent playing FrozenLake-v1. ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
feature-extraction
transformers
# Model Card ## Model Details - Architecture: ViT-Large with patch size 14 - Training Data: DTD dataset ## Training Details Adam optimizer with a constant learning rate of 1e-5 for 4000 training steps (batch_size=32). Only the vision encoder is fine-tuned. ## Evaluation Results Accuracy on DTD: - pre-trained: 0.554787278175354 - fine-tuned: 0.8547872304916382
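A minimal feature-extraction sketch for this checkpoint (not part of the original card; it assumes the image processor of the base openai/clip-vit-large-patch14 model and uses a hypothetical local image file):

```python
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")  # base model's processor
model = CLIPVisionModel.from_pretrained("tanganke/clip-vit-large-patch14_dtd")

image = Image.open("texture.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).pooler_output  # (1, 1024) embedding from the fine-tuned vision encoder
print(features.shape)
```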
{"datasets": ["tanganke/dtd"], "metrics": ["accuracy"], "base_model": ["openai/clip-vit-large-patch14"]}
tanganke/clip-vit-large-patch14_dtd
null
[ "transformers", "safetensors", "clip_vision_model", "feature-extraction", "dataset:tanganke/dtd", "base_model:openai/clip-vit-large-patch14", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:11:41+00:00
[]
[]
TAGS #transformers #safetensors #clip_vision_model #feature-extraction #dataset-tanganke/dtd #base_model-openai/clip-vit-large-patch14 #endpoints_compatible #region-us
# Model Card ## Model Details - Architecture: ViT-Large with patch size 14 - Training Data: DTD dataset ## Training Details Adam optimizer with a constant learning rate of 1e-5 for 4000 training steps (batch_size=32). Only the vision encoder is fine-tuned. ## Evaluation Results Accuracy on DTD: - pre-trained: 0.554787278175354 - fine-tuned: 0.8547872304916382
[ "# Model Card", "## Model Details\n\n- Architecture: ViT-Large with patch size 14\n- Training Data: DTD dataset", "## Training Details\n\n Adam Optimizer with a constant learning rate 1e-5 for 4000 steps training (batch_size=32).\n Only the vision encoder is fine-tuned.", "## Evaluation Results\n\n- pre-trained: 0.554787278175354\n- fine-tuned: 0.8547872304916382" ]
[ "TAGS\n#transformers #safetensors #clip_vision_model #feature-extraction #dataset-tanganke/dtd #base_model-openai/clip-vit-large-patch14 #endpoints_compatible #region-us \n", "# Model Card", "## Model Details\n\n- Architecture: ViT-Large with patch size 14\n- Training Data: DTD dataset", "## Training Details\n\n Adam Optimizer with a constant learning rate 1e-5 for 4000 steps training (batch_size=32).\n Only the vision encoder is fine-tuned.", "## Evaluation Results\n\n- pre-trained: 0.554787278175354\n- fine-tuned: 0.8547872304916382" ]
reinforcement-learning
ml-agents
# **PPO** Agent playing **Huggy** This is a trained model of a **PPO** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial that teaches you to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Blues-Monster/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
Blues-Monster/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-29T04:12:32+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# PPO Agent playing Huggy This is a trained model of a PPO agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial that teaches you to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how ML-Agents works: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser: 1. If the environment is part of ML-Agents official environments, go to URL 2. Find your model_id: Blues-Monster/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Blues-Monster/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Blues-Monster/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
harir/phi-3-mini-review-toxicity
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us", "has_space" ]
null
2024-04-29T04:12:59+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us #has_space
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us #has_space \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
feature-extraction
transformers
## Overview This is a bare model without any output layer or classification head. It has been quantized to be used for feature extraction tasks. **Usage** This model is intended to be used as a base for training on downstream tasks. In order to use it for predictions and inference, it should be fine-tuned on a specific task with an appropriate output layer or classification head added. **Quantization** The model has been quantized using the following parameters: Lora alpha: 16 Lora rank: 32 Lora target modules: all-linear bits: 4 LoftQ iterations: 5
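The overview above describes a bare, 4-bit/LoftQ-quantized Mistral backbone intended for feature extraction. Below is a minimal usage sketch, not the author's prescribed recipe: it assumes the checkpoint loads directly with `AutoModel.from_pretrained` and that masked mean pooling of the last hidden state is an acceptable feature vector; the pooling choice and the example sentence are illustrative, and depending on how the LoftQ adapters were exported you may additionally need PEFT to attach them.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative sketch: load the bare backbone and mean-pool the last hidden
# state into a sentence-level feature vector.
model_id = "smallsuper/Mistral-7B-v0.1-4bit-32rank"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
model.eval()

inputs = tokenizer(["An example sentence to embed."], return_tensors="pt", padding=True).to(model.device)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state              # (batch, seq_len, dim)
mask = inputs["attention_mask"].unsqueeze(-1).to(hidden.dtype)
features = (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # masked mean pooling
print(features.shape)
```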
{"pipeline_tag": "feature-extraction"}
smallsuper/Mistral-7B-v0.1-4bit-32rank
null
[ "transformers", "safetensors", "mistral", "feature-extraction", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:13:33+00:00
[]
[]
TAGS #transformers #safetensors #mistral #feature-extraction #endpoints_compatible #text-generation-inference #region-us
## Overview This is a bare model without any output layer or classification head. It has been quantized to be used for feature extraction tasks. Usage This model is intended to be used as a base for training on downstream tasks. In order to use it for predictions and inference, it should be fine-tuned on a specific task with an appropriate output layer or classification head added. Quantization The model has been quantized using the following parameters: Lora alpha: 16 Lora rank: 32 Lora target modules: all-linear bits: 4 LoftQ iterations: 5
[ "## Overview\n\nThis is a bare model without any output layer or classification head. It has been quantized to be used for feature extraction tasks.\n\nUsage\n\nThis model is intended to be used as a base for training on downstream tasks. In order to use it for predictions and inference, it should be fine-tuned on a specific task with an appropriate output layer or classification head added.\n\nQuantization\n\nThe model has been quantized using the following parameters:\n\nLora alpha: 16\n\nLora rank: 32\n\nLora target modules: all-linear\n\nbits: 4\n\nLoftQ iterations: 5" ]
[ "TAGS\n#transformers #safetensors #mistral #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n", "## Overview\n\nThis is a bare model without any output layer or classification head. It has been quantized to be used for feature extraction tasks.\n\nUsage\n\nThis model is intended to be used as a base for training on downstream tasks. In order to use it for predictions and inference, it should be fine-tuned on a specific task with an appropriate output layer or classification head added.\n\nQuantization\n\nThe model has been quantized using the following parameters:\n\nLora alpha: 16\n\nLora rank: 32\n\nLora target modules: all-linear\n\nbits: 4\n\nLoftQ iterations: 5" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1393 - F1: 0.8696 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2592 | 1.0 | 525 | 0.1507 | 0.8269 | | 0.1253 | 2.0 | 1050 | 0.1413 | 0.8550 | | 0.0793 | 3.0 | 1575 | 0.1393 | 0.8696 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
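A short, hypothetical inference example for this checkpoint follows. It assumes the standard token-classification pipeline applies; the entity label set comes from the (unspecified) fine-tuning data, which the model name suggests is German PAN-X NER, and the German example sentence is illustrative only.

```python
from transformers import pipeline

# Hypothetical usage sketch: run the fine-tuned checkpoint for NER-style
# token classification. Labels depend on the unspecified fine-tuning dataset.
ner = pipeline(
    "token-classification",
    model="u00890358/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```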
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": []}]}
u00890358/xlm-roberta-base-finetuned-panx-de
null
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:13:41+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-de ================================== This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1393 * F1: 0.8696 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
# DavidAU/LWM-Text-Chat-1M-Q8_0-GGUF This model was converted to GGUF format from [`LargeWorldModel/LWM-Text-Chat-1M`](https://huggingface.co/LargeWorldModel/LWM-Text-Chat-1M) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LargeWorldModel/LWM-Text-Chat-1M) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/LWM-Text-Chat-1M-Q8_0-GGUF --model lwm-text-chat-1m.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/LWM-Text-Chat-1M-Q8_0-GGUF --model lwm-text-chat-1m.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m lwm-text-chat-1m.Q8_0.gguf -n 128 ```
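If you would rather call the same GGUF file from Python than from the llama.cpp CLI, a sketch using the third-party `llama-cpp-python` bindings is shown below. This is an assumption on top of the card, which only documents the CLI; the prompt is simply the card's example prompt.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical sketch: fetch the Q8_0 GGUF from the Hub and run a short completion.
gguf_path = hf_hub_download(
    repo_id="DavidAU/LWM-Text-Chat-1M-Q8_0-GGUF",
    filename="lwm-text-chat-1m.Q8_0.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```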
{"tags": ["llama-cpp", "gguf-my-repo"], "inference": false}
DavidAU/LWM-Text-Chat-1M-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-29T04:14:18+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# DavidAU/LWM-Text-Chat-1M-Q8_0-GGUF This model was converted to GGUF format from 'LargeWorldModel/LWM-Text-Chat-1M' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/LWM-Text-Chat-1M-Q8_0-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-Chat-1M' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# DavidAU/LWM-Text-Chat-1M-Q8_0-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-Chat-1M' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ramprasadsoren7061/ol_chiki_tokenizer
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:14:20+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
charlieoneill/llama3-8b-hypogen
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:15:00+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
# EcomXL Inpaint ControlNet EcomXL contains a series of text-to-image diffusion models optimized for e-commerce scenarios, developed based on [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).<br/> For e-commerce scenarios, we trained Inpaint ControlNet to control diffusion models. Unlike the inpaint controlnets used for general scenarios, this model is fine-tuned with instance masks to prevent foreground outpainting. ## Examples <span style="width: 150px !important;display: inline-block;">`Foreground`<span> | <span style="width: 150px !important;display: inline-block;">`Mask`<span> | <span style="width: 150px !important;display: inline-block;">`w/o instance mask`<span> | <span style="width: 150px !important;display: inline-block;">`w/ instance mask`<span> :--:|:--:|:--:|:--: ![images](./images/inp_0.png) | ![images](./images/inp_1.png) | ![images](./images/inp_2.png) | ![images](./images/inp_3.png) <!-- <img src="https://huggingface.co/alimama-creative/EcomXL/resolve/main/images/inp_0.png" width="300"/> | <img src="https://huggingface.co/alimama-creative/EcomXL/resolve/main/images/inp_1.png" width="300"/> | <img src="https://huggingface.co/alimama-creative/EcomXL/resolve/main/images/inp_2.png" width="300"/> | <img src="https://huggingface.co/alimama-creative/EcomXL/resolve/main/images/inp_3.png" width="300"/> --> Using this ControlNet with a control weight of 0.5 may achieve better results. ## Training details In the first phase, the model was trained on 12M laion2B and internal source images with random masks for 20k steps. In the second phase, the model was trained on 3M e-commerce images with the instance mask for 20k steps.<br> Mixed precision: FP16<br> Learning rate: 1e-4<br> batch size: 2048<br> Noise offset: 0.05
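A minimal diffusers sketch for this inpaint ControlNet follows. It is an assumption-laden illustration rather than the authors' reference code: it presumes the weights load with `ControlNetModel` into `StableDiffusionXLControlNetInpaintPipeline`, passes the original image as the control image (consult the upstream repository for how the control input should actually be constructed, e.g. a masked foreground), and uses placeholder file names and prompt. The 0.5 conditioning scale follows the recommendation above.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetInpaintPipeline
from diffusers.utils import load_image

# Illustrative sketch: repaint the background around a product foreground.
controlnet = ControlNetModel.from_pretrained(
    "alimama-creative/EcomXL_controlnet_inpaint", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("product.png")        # placeholder: original/foreground image
mask = load_image("product_mask.png")    # placeholder: white marks the region to repaint
result = pipe(
    prompt="a bottle of lotion on a marble table, soft studio light",
    image=image,
    mask_image=mask,
    control_image=image,                  # assumption: see upstream repo for the exact control input
    controlnet_conditioning_scale=0.5,    # control weight recommended in the card
    num_inference_steps=30,
).images[0]
result.save("inpainted.png")
```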
{"language": ["en"], "license": "apache-2.0", "tags": ["stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "controlnet"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "inference": false, "pipeline_tag": "text-to-image"}
alimama-creative/EcomXL_controlnet_inpaint
null
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "controlnet", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:apache-2.0", "region:us" ]
null
2024-04-29T04:15:28+00:00
[]
[ "en" ]
TAGS #diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #controlnet #en #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-apache-2.0 #region-us
EcomXL Inpaint ControlNet ========================= EcomXL contains a series of text-to-image diffusion models optimized for e-commerce scenarios, developed based on Stable Diffusion XL. For e-commerce scenarios, we trained Inpaint ControlNet to control diffusion models. Unlike the inpaint controlnets used for general scenarios, this model is fine-tuned with instance masks to prevent foreground outpainting. Examples -------- Using this ControlNet with a control weight of 0.5 may achieve better results. Training details ---------------- In the first phase, the model was trained on 12M laion2B and internal source images with random masks for 20k steps. In the second phase, the model was trained on 3M e-commerce images with the instance mask for 20k steps. Mixed precision: FP16 Learning rate: 1e-4 batch size: 2048 Noise offset: 0.05
[]
[ "TAGS\n#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #controlnet #en #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-apache-2.0 #region-us \n" ]
text-to-image
diffusers
### Softedge ControlNet EcomXL contains a series of text-to-image diffusion models optimized for e-commerce scenarios, developed based on [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).<br/> The controlnet weights are fine-tuned based on stable-diffusion-xl-base-1.0. It works well on SDXL as well as community models based on SDXL. The model is trained on general data and e-commerce data, and has good capabilities in both general and e-commerce scenarios. #### Examples <span style="width: 150px !important;display: inline-block;">`Edge`<span> | <span style="width: 150px !important;display: inline-block;">`Output`<span> | <span style="width: 150px !important;display: inline-block;">`Output`<span> | <span style="width: 150px !important;display: inline-block;">`Output`<span> :--:|:--:|:--:|:--: ![images](./images/edge_0.png) | ![images](./images/edge_1.png) | ![images](./images/edge_2.png) | ![images](./images/edge_3.png) <!-- <img src="https://huggingface.co/alimama-creative/EcomXL/resolve/main/images/edge_0.png" width="300"/> | <img src="https://huggingface.co/alimama-creative/EcomXL/resolve/main/images/edge_1.png" width="300"/> | <img src="https://huggingface.co/alimama-creative/EcomXL/resolve/main/images/edge_2.png" width="300"/> | <img src="https://huggingface.co/alimama-creative/EcomXL/resolve/main/images/edge_3.png" width="300"/> --> #### Training details The model is trained for 37k steps. The training data includes 12M laion2B images and internal source images, as well as 3M e-commerce images. During training, the softedge preprocessor is randomly selected from pidinet, hed, pidisafe and hedsafe, which are officially supported by Automatic&&Mikubill. <br> Mixed precision: FP16<br> Learning rate: 1e-5<br> batch size: 1024<br> Noise offset: 0.05
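A hypothetical diffusers sketch follows: it assumes the weights load with `ControlNetModel` into `StableDiffusionXLControlNetPipeline`, builds the softedge map with PidiNet from `controlnet_aux` (one of the preprocessors named above), and uses a placeholder input image, prompt, and conditioning scale.

```python
import torch
from controlnet_aux import PidiNetDetector  # pip install controlnet_aux
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Illustrative sketch: condition SDXL on a PidiNet softedge map.
processor = PidiNetDetector.from_pretrained("lllyasviel/Annotators")
edge = processor(load_image("input.png"))            # placeholder input image

controlnet = ControlNetModel.from_pretrained(
    "alimama-creative/EcomXL_controlnet_softedge", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a perfume bottle on a wooden table, product photography",
    image=edge,                                      # softedge conditioning image
    controlnet_conditioning_scale=0.6,               # illustrative value, not from the card
    num_inference_steps=30,
).images[0]
image.save("softedge_result.png")
```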
{"language": ["en"], "license": "apache-2.0", "tags": ["stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "controlnet"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "inference": false, "pipeline_tag": "text-to-image"}
alimama-creative/EcomXL_controlnet_softedge
null
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "controlnet", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:apache-2.0", "region:us" ]
null
2024-04-29T04:15:47+00:00
[]
[ "en" ]
TAGS #diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #controlnet #en #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-apache-2.0 #region-us
### Softedge ControlNet EcomXL contains a series of text-to-image diffusion models optimized for e-commerce scenarios, developed based on Stable Diffusion XL. The controlnet weights are fine-tuned based on stable-diffusion-xl-base-1.0. It works well on SDXL as well as community models based on SDXL. The model is trained on general data and e-commerce data, and has good capabilities in both general and e-commerce scenarios. #### Examples #### Training details The model is trained for 37k steps. The training data includes 12M laion2B images and internal source images, as well as 3M e-commerce images. During training, the softedge preprocessor is randomly selected from pidinet, hed, pidisafe and hedsafe, which are officially supported by Automatic&&Mikubill. Mixed precision: FP16 Learning rate: 1e-5 batch size: 1024 Noise offset: 0.05
[ "### Softedge ControlNet\n\n\nEcomXL contains a series of text-to-image diffusion models optimized for e-commerce scenarios, developed based on Stable Diffusion XL. \n\nThe controlnet weights are fine-tuned based on stable-diffusion-xl-base-1.0. \nIt works well on SDXL as well as community models based on SDXL.\nThe model is trained on general data and e-commerce data, and has good capabilities in both general and e-commerce scenarios.", "#### Examples", "#### Training details\n\n\nThe model is trained for 37k steps. The training data includes 12M laion2B images and internal sources images, as well as 3M e-commerce images. During training, the softedge preprocessor is randomly selected from pidinet, hed, pidisafe and hedsafe, which are officially supported by Automatic&&Mikubill.  \n\nMixed precision: FP16 \n\nLearning rate: 1e-5 \n\nbatch size: 1024 \n\nNoise offset: 0.05" ]
[ "TAGS\n#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #controlnet #en #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-apache-2.0 #region-us \n", "### Softedge ControlNet\n\n\nEcomXL contains a series of text-to-image diffusion models optimized for e-commerce scenarios, developed based on Stable Diffusion XL. \n\nThe controlnet weights are fine-tuned based on stable-diffusion-xl-base-1.0. \nIt works well on SDXL as well as community models based on SDXL.\nThe model is trained on general data and e-commerce data, and has good capabilities in both general and e-commerce scenarios.", "#### Examples", "#### Training details\n\n\nThe model is trained for 37k steps. The training data includes 12M laion2B images and internal sources images, as well as 3M e-commerce images. During training, the softedge preprocessor is randomly selected from pidinet, hed, pidisafe and hedsafe, which are officially supported by Automatic&&Mikubill.  \n\nMixed precision: FP16 \n\nLearning rate: 1e-5 \n\nbatch size: 1024 \n\nNoise offset: 0.05" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
MLGuy2/Team7
null
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:16:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
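Since the "How to Get Started" section above is empty, the following is a purely hypothetical loading sketch, inferred only from this record's tags (transformers, llama, text-generation) and repository id; the actual intended usage of the model is not documented.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage: the repo id is taken from this record's id field;
# the card itself provides no usage instructions.
repo_id = "shallow6414/e3a99ur"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```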
{"library_name": "transformers", "tags": []}
shallow6414/e3a99ur
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:16:09+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="Paoja/Taxiv3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
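As a complementary sketch (not part of the original card), the loaded agent can be rolled out greedily. This assumes the downloaded pickle is a dict exposing "qtable" and "env_id" keys, as in the Hugging Face Deep RL course convention, and that `load_from_hub` above is the course's helper for downloading and unpickling the file.

```python
import gymnasium as gym
import numpy as np

# Assumed keys: "qtable" (state x action array) and "env_id" ("Taxi-v3").
env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])

state, _ = env.reset()
done, episode_return = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print(f"Episode return: {episode_return}")
```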
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxiv3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.54 +/- 2.73", "name": "mean_reward", "verified": false}]}]}]}
Paoja/Taxiv3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-29T04:16:37+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3.

## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
null
null
# DavidAU/LWM-Text-1M-Q6_K-GGUF
This model was converted to GGUF format from [`LargeWorldModel/LWM-Text-1M`](https://huggingface.co/LargeWorldModel/LWM-Text-1M) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LargeWorldModel/LWM-Text-1M) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/LWM-Text-1M-Q6_K-GGUF --model lwm-text-1m.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/LWM-Text-1M-Q6_K-GGUF --model lwm-text-1m.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m lwm-text-1m.Q6_K.gguf -n 128
```
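Besides the llama.cpp CLI and server shown above, the same GGUF file can be loaded from Python with the llama-cpp-python bindings. This is only an illustrative sketch and assumes the quantized file has already been downloaded locally (e.g., via huggingface-cli).

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumes lwm-text-1m.Q6_K.gguf was downloaded from DavidAU/LWM-Text-1M-Q6_K-GGUF.
llm = Llama(model_path="lwm-text-1m.Q6_K.gguf", n_ctx=2048)

output = llm("The meaning to life and the universe is", max_tokens=128)
print(output["choices"][0]["text"])
```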
{"tags": ["llama-cpp", "gguf-my-repo"], "inference": false}
DavidAU/LWM-Text-1M-Q6_K-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-29T04:19:00+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# DavidAU/LWM-Text-1M-Q6_K-GGUF This model was converted to GGUF format from 'LargeWorldModel/LWM-Text-1M' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/LWM-Text-1M-Q6_K-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-1M' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# DavidAU/LWM-Text-1M-Q6_K-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-1M' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
## FinguAI-Chat-Mid-v1

### Overview

The FinguAI-Chat-Mid-v1 model offers a specialized curriculum tailored to English, Korean, and Japanese speakers interested in finance, investment, and legal frameworks. It aims to enhance language proficiency while providing insights into global finance markets and regulatory landscapes.

### Key Features

- **Global Perspective**: Explores diverse financial markets and regulations across English, Korean, and Japanese contexts.
- **Language Proficiency**: Enhances language skills in English, Korean, and Japanese for effective communication in finance and legal domains.
- **Career Advancement**: Equips learners with knowledge and skills for roles in investment banking, corporate finance, asset management, and regulatory compliance.

### Model Information

- **Model Name**: FinguAI-Chat-Mid-v1
- **Checkpoint**: FinguAI-Chat-Mid-v1
- **Author**: Grinda AI Inc.
- **License**: Apache-2.0

### Training Details

- **Fine-Tuning**: The model was fine-tuned from the base model [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) via ORPO fine-tuning, using the TRL library and Transformers.
- **Dataset**: The fine-tuning dataset consisted of 28178 training samples.

### How to Use

To use the FinguAI-Chat-Mid-v1 model, you can use the Hugging Face Transformers library. Here's a Python code snippet demonstrating how to load the model and generate predictions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = 'FinguAI-Chat-Mid-v1'

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer)
model.to('cuda')

messages = [
    {"role": "system", "content": "You are a finance specialist; help the user and provide accurate information."},
    {"role": "user", "content": "What is the best approach to prevent losses?"},
]
tokenized_chat = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

generation_params = {
    'max_new_tokens': 1000,
    'use_cache': True,
    'do_sample': True,
    'temperature': 0.7,
    'top_p': 0.9,
    'top_k': 50,
    'eos_token_id': tokenizer.eos_token_id,
}

outputs = model.generate(tokenized_chat, **generation_params, streamer=streamer)
decoded_outputs = tokenizer.batch_decode(outputs)
```
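If flash-attn is not available on the target machine, a reasonable fallback is to load the model without the flash_attention_2 implementation. This variant is an editorial sketch, not part of the original card; the Hub id FINGU-AI/FinguAI-Chat-Mid-v1 comes from this record's metadata.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FINGU-AI/FinguAI-Chat-Mid-v1"

# Default attention implementation; trust_remote_code may be needed since the
# repository is tagged custom_code (assumption based on this record's tags).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```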
{"license": "mit", "tags": ["trl", "orpo", "generated_from_trainer"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "model-index": [{"name": "FinguAI-Chat-Mid-v1", "results": []}]}
FINGU-AI/FinguAI-Chat-Mid-v1
null
[ "transformers", "safetensors", "phi3", "text-generation", "trl", "orpo", "generated_from_trainer", "conversational", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us", "has_space" ]
null
2024-04-29T04:19:05+00:00
[]
[]
TAGS #transformers #safetensors #phi3 #text-generation #trl #orpo #generated_from_trainer #conversational #custom_code #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #autotrain_compatible #endpoints_compatible #region-us #has_space
## FinguAI-Chat-Mid-v1

### Overview

The FinguAI-Chat-Mid-v1 model offers a specialized curriculum tailored to English, Korean, and Japanese speakers interested in finance, investment, and legal frameworks. It aims to enhance language proficiency while providing insights into global finance markets and regulatory landscapes.

### Key Features

- Global Perspective: Explores diverse financial markets and regulations across English, Korean, and Japanese contexts.
- Language Proficiency: Enhances language skills in English, Korean, and Japanese for effective communication in finance and legal domains.
- Career Advancement: Equips learners with knowledge and skills for roles in investment banking, corporate finance, asset management, and regulatory compliance.

### Model Information

- Model Name: FinguAI-Chat-Mid-v1
- Checkpoint: FinguAI-Chat-Mid-v1
- Author: Grinda AI Inc.
- License: Apache-2.0

### Training Details

- Fine-Tuning: The model was fine-tuned from the base model microsoft/Phi-3-mini-4k-instruct via ORPO fine-tuning, using the TRL library and Transformers.
- Dataset: The fine-tuning dataset consisted of 28178 training samples.

### How to Use

To use the FinguAI-Chat-Mid-v1 model, you can use the Hugging Face Transformers library. Here's a Python code snippet demonstrating how to load the model and generate predictions:
[ "## FinguAI-Chat-Mid-v1", "### Overview\n\nThe FinguAI-Chat-Mid-v1 model offers a specialized curriculum tailored to English, Korean, and Japanese speakers interested in finance, investment, and legal frameworks. \nIt aims to enhance language proficiency while providing insights into global finance markets and regulatory landscapes.", "### Key Features\n\n- Global Perspective: Explores diverse financial markets and regulations across English, Korean, and Japanese contexts.\n- Language Proficiency: Enhances language skills in English, Korean, and Japanese for effective communication in finance and legal domains.\n- Career Advancement: Equips learners with knowledge and skills for roles in investment banking, corporate finance, asset management, and regulatory compliance.", "### Model Information\n\n- Model Name: FinguAI-Chat-Mid-v1\n- Checkpoint: FinguAI-Chat-Mid-v1\n- Author: Grinda AI Inc.\n- License: Apache-2.0", "### Training Details\n\n- Fine-Tuning: The model was fine-tuned on the base model microsoft/Phi-3-mini-4k-instruct\n- through ORPO fine-tuning using the TrL Library and Transformer.\n- Dataset: The fine-tuning dataset consisted of 28178 training samples.", "### How to Use\n\nTo use the FinguAI-Chat-Mid-v1 model, you can utilize the Hugging Face Transformers library. \nHere's a Python code snippet demonstrating how to load the model and generate predictions:" ]
[ "TAGS\n#transformers #safetensors #phi3 #text-generation #trl #orpo #generated_from_trainer #conversational #custom_code #base_model-microsoft/Phi-3-mini-4k-instruct #license-mit #autotrain_compatible #endpoints_compatible #region-us #has_space \n", "## FinguAI-Chat-Mid-v1", "### Overview\n\nThe FinguAI-Chat-Mid-v1 model offers a specialized curriculum tailored to English, Korean, and Japanese speakers interested in finance, investment, and legal frameworks. \nIt aims to enhance language proficiency while providing insights into global finance markets and regulatory landscapes.", "### Key Features\n\n- Global Perspective: Explores diverse financial markets and regulations across English, Korean, and Japanese contexts.\n- Language Proficiency: Enhances language skills in English, Korean, and Japanese for effective communication in finance and legal domains.\n- Career Advancement: Equips learners with knowledge and skills for roles in investment banking, corporate finance, asset management, and regulatory compliance.", "### Model Information\n\n- Model Name: FinguAI-Chat-Mid-v1\n- Checkpoint: FinguAI-Chat-Mid-v1\n- Author: Grinda AI Inc.\n- License: Apache-2.0", "### Training Details\n\n- Fine-Tuning: The model was fine-tuned on the base model microsoft/Phi-3-mini-4k-instruct\n- through ORPO fine-tuning using the TrL Library and Transformer.\n- Dataset: The fine-tuning dataset consisted of 28178 training samples.", "### How to Use\n\nTo use the FinguAI-Chat-Mid-v1 model, you can utilize the Hugging Face Transformers library. \nHere's a Python code snippet demonstrating how to load the model and generate predictions:" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # G0428HMA16 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1057 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6372 | 0.09 | 10 | 1.7096 | | 1.1238 | 0.18 | 20 | 0.4795 | | 0.2595 | 0.27 | 30 | 0.1668 | | 0.155 | 0.36 | 40 | 0.1603 | | 0.1489 | 0.45 | 50 | 0.1483 | | 0.1474 | 0.54 | 60 | 0.1499 | | 0.1479 | 0.63 | 70 | 0.1470 | | 0.1491 | 0.73 | 80 | 0.1479 | | 0.1413 | 0.82 | 90 | 0.1486 | | 0.1448 | 0.91 | 100 | 0.1479 | | 0.1492 | 1.0 | 110 | 0.1488 | | 0.1429 | 1.09 | 120 | 0.1485 | | 0.1447 | 1.18 | 130 | 0.1485 | | 0.146 | 1.27 | 140 | 0.1473 | | 0.1478 | 1.36 | 150 | 0.1466 | | 0.1423 | 1.45 | 160 | 0.1507 | | 0.1434 | 1.54 | 170 | 0.1435 | | 0.1392 | 1.63 | 180 | 0.1377 | | 0.1379 | 1.72 | 190 | 0.1359 | | 0.1285 | 1.81 | 200 | 0.1294 | | 0.1271 | 1.9 | 210 | 0.1303 | | 0.1269 | 1.99 | 220 | 0.1228 | | 0.1118 | 2.08 | 230 | 0.1210 | | 0.1144 | 2.18 | 240 | 0.1153 | | 0.1106 | 2.27 | 250 | 0.1123 | | 0.1116 | 2.36 | 260 | 0.1155 | | 0.1158 | 2.45 | 270 | 0.1118 | | 0.1066 | 2.54 | 280 | 0.1109 | | 0.0991 | 2.63 | 290 | 0.1098 | | 0.1016 | 2.72 | 300 | 0.1064 | | 0.1029 | 2.81 | 310 | 0.1058 | | 0.1052 | 2.9 | 320 | 0.1057 | | 0.106 | 2.99 | 330 | 0.1057 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
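The hyperparameters listed above map directly onto the transformers Trainer API. The snippet below is only an illustrative reconstruction of that configuration (the actual training script is not included in the card).

```python
from transformers import TrainingArguments

# Illustrative only: mirrors the hyperparameters reported above.
# Optimizer defaults (Adam betas=(0.9, 0.999), epsilon=1e-08) match the listed values.
args = TrainingArguments(
    output_dir="G0428HMA16",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,  # 8 * 16 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```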
{"license": "gemma", "tags": ["generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "G0428HMA16", "results": []}]}
Litzy619/G0428HMA16
null
[ "safetensors", "generated_from_trainer", "base_model:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-04-29T04:19:11+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us
G0428HMA16 ========== This model is a fine-tuned version of google/gemma-2b on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1057 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 100 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
null
null
# DavidAU/LWM-Text-512K-Q8_0-GGUF
This model was converted to GGUF format from [`LargeWorldModel/LWM-Text-512K`](https://huggingface.co/LargeWorldModel/LWM-Text-512K) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LargeWorldModel/LWM-Text-512K) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/LWM-Text-512K-Q8_0-GGUF --model lwm-text-512k.Q8_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/LWM-Text-512K-Q8_0-GGUF --model lwm-text-512k.Q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m lwm-text-512k.Q8_0.gguf -n 128
```
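As an alternative to passing `--hf-repo` on the command line, the quantized file can be fetched programmatically first; this is a small sketch using huggingface_hub, with the filename matching the one used in the commands above.

```python
from huggingface_hub import hf_hub_download

# Downloads the Q8_0 GGUF file referenced in the commands above.
gguf_path = hf_hub_download(
    repo_id="DavidAU/LWM-Text-512K-Q8_0-GGUF",
    filename="lwm-text-512k.Q8_0.gguf",
)
print(gguf_path)  # pass this path to llama.cpp via -m / --model
```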
{"tags": ["llama-cpp", "gguf-my-repo"], "inference": false}
DavidAU/LWM-Text-512K-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-29T04:19:42+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# DavidAU/LWM-Text-512K-Q8_0-GGUF This model was converted to GGUF format from 'LargeWorldModel/LWM-Text-512K' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/LWM-Text-512K-Q8_0-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-512K' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# DavidAU/LWM-Text-512K-Q8_0-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-512K' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
null
## Exllama v2 Quantizations of dolphin-2.9-llama3-8b-256k

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)</b>

Each branch contains an individual bits-per-weight quantization, with the main one containing only the measurement.json for further conversions.

Original model: https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b-256k

## Prompt format

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

## Available sizes

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-256k-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-256k-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-256k-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-256k-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-256k-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-256k-exl2 dolphin-2.9-llama3-8b-256k-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:

Linux:

```shell
huggingface-cli download bartowski/dolphin-2.9-llama3-8b-256k-exl2 --revision 6_5 --local-dir dolphin-2.9-llama3-8b-256k-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
huggingface-cli download bartowski/dolphin-2.9-llama3-8b-256k-exl2 --revision 6_5 --local-dir dolphin-2.9-llama3-8b-256k-exl2-6.5 --local-dir-use-symlinks False
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
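Besides git and huggingface-cli, a single branch can also be pulled from Python; the sketch below uses huggingface_hub's snapshot_download with the 6_5 revision recommended above (an editorial illustration, not part of the original card).

```python
from huggingface_hub import snapshot_download

# Downloads only the 6.5 bpw branch into a local folder.
local_dir = snapshot_download(
    repo_id="bartowski/dolphin-2.9-llama3-8b-256k-exl2",
    revision="6_5",
    local_dir="dolphin-2.9-llama3-8b-256k-exl2-6_5",
)
print(local_dir)
```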
{"license": "llama3", "quantized_by": "bartowski", "pipeline_tag": "text-generation"}
bartowski/dolphin-2.9-llama3-8b-256k-exl2
null
[ "text-generation", "license:llama3", "region:us" ]
null
2024-04-29T04:21:03+00:00
[]
[]
TAGS #text-generation #license-llama3 #region-us
Exllama v2 Quantizations of dolphin-2.9-llama3-8b-256k
------------------------------------------------------

Using turboderp's ExLlamaV2 v0.0.20 for quantization.

**The "main" branch only contains the URL, download one of the other branches for the model (see below)**

Each branch contains an individual bits per weight, with the main one containing only the URL for further conversions.

Original model: URL

Prompt format
-------------

Available sizes
---------------

Download instructions
---------------------

With git:

With huggingface hub (credit to TheBloke for instructions):

To download a specific branch, use the '--revision' parameter. For example, to download the 6.5 bpw branch:

Linux:

Windows (which apparently doesn't like \_ in folders sometimes?):

Want to support my work? Visit my ko-fi page here: URL
[]
[ "TAGS\n#text-generation #license-llama3 #region-us \n" ]
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
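Because the usage sections above are left as "[More Information Needed]", the following is only a hypothetical sketch based on this record's tags (wav2vec2, automatic-speech-recognition) and repository id; actual usage may differ.

```python
from transformers import pipeline

# Hypothetical usage: the repo id is taken from this record's metadata.
asr = pipeline("automatic-speech-recognition", model="spsither/mms_300_v2.1190")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```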
{"library_name": "transformers", "tags": []}
spsither/mms_300_v2.1190
null
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:21:30+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #wav2vec2 #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
## HOW TO Use frankmorales2020/torchtune-Llama-2-7b ARTICLE: https://medium.com/ai-in-plain-english/torchtune-simplifying-llm-fine-tuning-8811d2bb25a5 # STEP1 ```python !pip install torchtune -q ``` # STEP2 ```python !tune -h ``` ``` usage: tune [-h] {download,ls,cp,run,validate} ... Welcome to the TorchTune CLI! options: -h, --help show this help message and exit subcommands: {download,ls,cp,run,validate} download Download a model from the Hugging Face Hub. ls List all built-in recipes and configs cp Copy a built-in recipe or config to a local path. run Run a recipe. For distributed recipes, this supports all torchrun arguments. validate Validate a config and ensure that it is well-formed. ``` # STEP3 ```python !tune download frankmorales2020/torchtune-Llama-2-7b --output-dir /tmp/Llama-2-7b-hf ``` # STEP4 ```python !tune cp generation /content/custom_generation_config.yaml ``` # STEP5 ```python !tune run generate --config /content/custom_generation_config.yaml prompt="What are some interesting sites to visit in the Bay Area?" ``` ``` INFO:torchtune.utils.logging:Running InferenceRecipe with resolved config: checkpointer: _component_: torchtune.utils.FullModelHFCheckpointer checkpoint_dir: /tmp/Llama-2-7b-hf/ checkpoint_files: - hf_model_0001_0.pt - hf_model_0002_0.pt model_type: LLAMA2 output_dir: /tmp/Llama-2-7b-hf/ device: cuda dtype: bf16 max_new_tokens: 300 model: _component_: torchtune.models.llama2.llama2_7b prompt: What are some interesting sites to visit in the Bay Area? quantizer: null seed: 1234 temperature: 0.6 tokenizer: _component_: torchtune.models.llama2.llama2_tokenizer path: /tmp/Llama-2-7b-hf/tokenizer.model top_k: 300 DEBUG:torchtune.utils.logging:Setting manual seed to local seed 1234. Local seed is seed + rank = 1234 + 0 INFO:torchtune.utils.logging:Model is initialized with precision torch.bfloat16. INFO:torchtune.utils.logging: What are some interesting sites to visit in the Bay Area? What are some interesting sites to visit in the Bay Area? The Bay Area is home to many interesting sites, from the iconic Golden Gate Bridge to the quirky Alcatraz Island. Here are some of the most interesting sites to visit in the Bay Area: Golden Gate Bridge: This suspension bridge is one of the most recognizable landmarks in the world. It spans the Golden Gate Strait, connecting San Francisco to Marin County. Visitors can take a walk or bike ride across the bridge and enjoy the stunning views of the bay. Alcatraz Island: This former prison is now a popular tourist attraction. Visitors can take a ferry to the island and explore the cell blocks, hospital, and other buildings. There is also a museum on the island, which tells the history of the prison and its most famous inmates. Coit Tower: This 210-foot tower is located in the Telegraph Hill neighborhood of San Francisco. It offers panoramic views of the city and the bay. Visitors can take an elevator to the top of the tower and enjoy the views. Sutro Baths: These ruins are located in the Lands End area of San Francisco. The baths were a popular swimming and spa destination in the late 19th century, but closed in 1966. Today, visitors can explore the ru INFO:torchtune.utils.logging:Time for inference: 20.15 sec total, 14.89 tokens/sec INFO:torchtune.utils.logging:Bandwidth achieved: 233.66 GB/s INFO:torchtune.utils.logging:Memory used: 15.72 GB ``` # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. 
This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. 
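For reference, the `INST` and `<<SYS>>` tags mentioned above are assembled as shown below for the chat-tuned variants. This is a brief editorial illustration based on Meta's published `chat_completion` reference; consult that code for the exact multi-turn handling.

```python
def build_llama2_chat_prompt(system_prompt: str, user_message: str) -> str:
    # Single-turn Llama-2-Chat format; the tokenizer adds the BOS/EOS tokens.
    return (
        f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message.strip()} [/INST]"
    )

print(build_llama2_chat_prompt(
    "You are a helpful assistant.",
    "What are some interesting sites to visit in the Bay Area?",
))
```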
## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). 
|||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)| |70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
{"language": ["en"], "license": "llama2", "tags": ["facebook", "meta", "pytorch", "llama", "llama-2"], "extra_gated_heading": "You need to share contact information with Meta to access this model", "extra_gated_prompt": "### LLAMA 2 COMMUNITY LICENSE AGREEMENT\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. \n\"Documentation\" means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/. \n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. \n\"Llama 2\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.\n\"Llama Materials\" means, collectively, Meta's proprietary Llama 2 and documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\nBy clicking \"I Accept\" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.\n1. License Rights and Redistribution. \na. Grant of Rights. You are granted a non-exclusive, worldwide, non- transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. \nb. Redistribution and Use.\ni. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. \nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. \niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a \"Notice\" text file distributed as a part of such copies: \"Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.\"\niv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). \n\n2. Additional Commercial Terms. 
If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \"AS IS\" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials.\nb. Subject to Meta's ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. 
The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. \n### Llama 2 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (\u201cPolicy\u201d). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).\n#### Prohibited Uses\nWe want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to:\n1. Violate the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: \n 1. Violence or terrorism \n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices \n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system \n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following:\n 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Llama 2 related to the following:\n 1. 
Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Llama 2 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement \n 4. Fail to appropriately disclose to end users any known dangers of your AI system \nPlease report any violation of this Policy, software \u201cbug,\u201d or other problems that could lead to a violation of this Policy through one of the following means: \n * Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)\n * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) \n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [[email protected]](mailto:[email protected])", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit", "pipeline_tag": "text-generation"}
frankmorales2020/torchtune-Llama-2-7b
null
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-2", "en", "arxiv:2307.09288", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:22:07+00:00
[ "2307.09288" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #llama #text-generation #facebook #meta #llama-2 #en #arxiv-2307.09288 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
HOW TO Use frankmorales2020/torchtune-Llama-2-7b ------------------------------------------------ ARTICLE: URL STEP1 ===== STEP2 ===== STEP3 ===== STEP4 ===== STEP5 ===== Llama 2 ======= Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. Model Details ------------- *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the website and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. Model Developers Meta Variations Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. Input Models input text only. Output Models generate text only. Model Architecture Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. Model Dates Llama 2 was trained between January 2023 and July 2023. Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. License A custom commercial license is available at: URL Research Paper "Llama-2: Open Foundation and Fine-tuned Chat Models" Intended Use ------------ Intended Use Cases Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the 'INST' and '<>' tags, 'BOS' and 'EOS' tokens, and the whitespaces and breaklines in between (we recommend calling 'strip()' on inputs to avoid double-spaces). See our reference code in github for details: 'chat\_completion'. Out-of-scope Uses Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. Hardware and Software --------------------- Training Factors We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. Carbon Footprint Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). 
Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. CO2 emissions during pretraining. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. Training Data ------------- Overview Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. Data Freshness The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. Evaluation Results ------------------ In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. Overall performance on grouped academic benchmarks. *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. Evaluation of pretrained LLMs on automatic safety benchmarks. For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). Evaluation of fine-tuned LLMs on different safety datasets. Same metric definitions as above. Ethical Considerations and Limitations -------------------------------------- Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at URL Reporting Issues ---------------- Please report any software “bug,” or other problems with the models through one of the following means: * Reporting issues with the model: URL * Reporting problematic content generated by the model: URL * Reporting bugs and security concerns: URL Llama Model Index -----------------
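For the chat formatting mentioned under Intended Use, a rough sketch of the expected prompt shape is given below. It applies only to the -chat variants, not to this pretrained checkpoint, and the exact whitespace and special tokens should be verified against the reference 'chat_completion' code in the Llama GitHub repository.

```python
# Sketch of the Llama 2 chat prompt layout described above (chat variants only).
system = "You are a helpful assistant."   # placeholder system message
user = "Hello!"                           # placeholder user turn

prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"
print(prompt)
```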
[]
[ "TAGS\n#transformers #pytorch #safetensors #llama #text-generation #facebook #meta #llama-2 #en #arxiv-2307.09288 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
aiqwe/gemma-2b-it-example-v1
null
[ "transformers", "tensorboard", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:23:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #tensorboard #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #tensorboard #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428B1 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1466 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 60 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7159 | 0.09 | 10 | 1.9907 | | 1.3938 | 0.18 | 20 | 0.5917 | | 0.2804 | 0.27 | 30 | 0.1637 | | 0.164 | 0.36 | 40 | 0.1546 | | 0.1511 | 0.45 | 50 | 0.1503 | | 0.1892 | 0.54 | 60 | 0.1511 | | 0.1599 | 0.63 | 70 | 0.1479 | | 0.1619 | 0.73 | 80 | 0.1522 | | 0.1444 | 0.82 | 90 | 0.1492 | | 0.1531 | 0.91 | 100 | 0.1480 | | 0.1561 | 1.0 | 110 | 0.1506 | | 0.1445 | 1.09 | 120 | 0.1471 | | 0.1713 | 1.18 | 130 | 0.1497 | | 0.1559 | 1.27 | 140 | 0.1478 | | 0.1598 | 1.36 | 150 | 0.1466 | | 0.143 | 1.45 | 160 | 0.1484 | | 0.144 | 1.54 | 170 | 0.1462 | | 0.1456 | 1.63 | 180 | 0.1464 | | 0.1548 | 1.72 | 190 | 0.1495 | | 0.1586 | 1.81 | 200 | 0.1473 | | 0.1459 | 1.9 | 210 | 0.1471 | | 0.1447 | 1.99 | 220 | 0.1486 | | 0.1457 | 2.08 | 230 | 0.1464 | | 0.1667 | 2.18 | 240 | 0.1467 | | 0.1431 | 2.27 | 250 | 0.1466 | | 0.1444 | 2.36 | 260 | 0.1474 | | 0.1507 | 2.45 | 270 | 0.1473 | | 0.1514 | 2.54 | 280 | 0.1468 | | 0.1538 | 2.63 | 290 | 0.1470 | | 0.1443 | 2.72 | 300 | 0.1467 | | 0.1444 | 2.81 | 310 | 0.1467 | | 0.1641 | 2.9 | 320 | 0.1466 | | 0.1564 | 2.99 | 330 | 0.1466 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
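For reference, a rough sketch of how the hyperparameters above could map onto `transformers.TrainingArguments` is shown below; the original training script was not released, so the output path is a placeholder and the Adam betas/epsilon are left at their defaults (which match the values reported).

```python
# Hypothetical reconstruction of the reported hyperparameters; not the original script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="O0428B1",                      # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=32,            # 4 x 32 = effective batch size of 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=60,
    num_train_epochs=3,
    fp16=True,                                 # "Native AMP" mixed precision
)
```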
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428B1", "results": []}]}
Litzy619/O0428B1
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-29T04:25:06+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
O0428B1 ======= This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1466 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 32 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 60 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 32\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 32\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # food_model_calsification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3138 - Accuracy: 0.904 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7648 | 0.992 | 62 | 2.5554 | 0.844 | | 1.786 | 2.0 | 125 | 1.6917 | 0.881 | | 1.4047 | 2.992 | 187 | 1.3760 | 0.912 | | 1.2497 | 3.968 | 248 | 1.3138 | 0.904 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
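A minimal inference sketch with the 🤗 `pipeline` API is shown below; the image path is a placeholder, and the label set depends on the (unreleased) training dataset.

```python
# Minimal sketch: classify a local image with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="georffrey/food_model_calsification")

predictions = classifier("example_dish.jpg")  # placeholder image path
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```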
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "food_model_calsification", "results": []}]}
georffrey/food_model_calsification
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:27:13+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
food\_model\_calsification ========================== This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.3138 * Accuracy: 0.904 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 4 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/vfojwet
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:27:51+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# **BLOSSOM-v5-32b** [💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/) ### What's new? The Blossom V5 series of models is fully trained using high-quality data distilled from gpt-4-0125-preview, resulting in significant improvements. ### Introduction Blossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Qwen1.5-32B pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source. Training was conducted in two stages. The first stage used the 40K Wizard, 40K Orca, and 10K Math single-turn instruction datasets, training for 1 epoch; the second stage used the 10K Blossom chat multi-turn dialogue dataset, plus 10% randomly sampled data from the first stage, training for 3 epochs. ### Inference Inference is performed in the form of dialogue continuation. Single-turn dialogue ``` A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. |Human|: hello |Bot|: ``` Multi-turn dialogue ``` A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. |Human|: hello |Bot|: Hello! How can I assist you today?<|endoftext|> |Human|: Generate a random number using python |Bot|: ``` Note: Append `<|endoftext|>` to the end of the Bot's output in each historical turn.
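A minimal single-turn generation sketch following the dialogue-continuation format above is given below; it assumes enough GPU memory for a 32B checkpoint (or additional quantization on your side), and the exact line breaks in the prompt follow the format shown in the card.

```python
# Sketch of single-turn inference using the Blossom dialogue-continuation format.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Azure99/blossom-v5-32b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = (
    "A chat between a human and an artificial intelligence bot. "
    "The bot gives helpful, detailed, and polite answers to the human's questions.\n"
    "|Human|: hello\n"
    "|Bot|: "
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```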
{"language": ["zh", "en"], "license": "apache-2.0", "datasets": ["Azure99/blossom-chat-v3", "Azure99/blossom-math-v4", "Azure99/blossom-wizard-v3", "Azure99/blossom-orca-v3"]}
Azure99/blossom-v5-32b
null
[ "transformers", "safetensors", "qwen2", "text-generation", "zh", "en", "dataset:Azure99/blossom-chat-v3", "dataset:Azure99/blossom-math-v4", "dataset:Azure99/blossom-wizard-v3", "dataset:Azure99/blossom-orca-v3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:28:02+00:00
[]
[ "zh", "en" ]
TAGS #transformers #safetensors #qwen2 #text-generation #zh #en #dataset-Azure99/blossom-chat-v3 #dataset-Azure99/blossom-math-v4 #dataset-Azure99/blossom-wizard-v3 #dataset-Azure99/blossom-orca-v3 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# BLOSSOM-v5-32b Github • Blossom Chat Demo ### What's new? The Blossom V5 series models is fully trained using high-quality data distilled from gpt-4-0125-preview, resulting in significant improvements. ### Introduction Blossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Qwen1.5-32B pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source. Training was conducted in two stages. The first stage used 40K Wizard, 40K Orca, 10K Math single-turn instruction datasets, training for 1 epoch; the second stage used 10K Blossom chat multi-turn dialogue dataset, and 10% randomly sampled data from the first stage, training for 3 epochs. ### Inference Inference is performed in the form of dialogue continuation. Single-turn dialogue Multi-turn dialogue Note: At the end of the Bot's output in the historical conversation, append a '<|endoftext|>'.
[ "# BLOSSOM-v5-32b\n\nGithub • Blossom Chat Demo", "### What's new?\n\nThe Blossom V5 series models is fully trained using high-quality data distilled from gpt-4-0125-preview, resulting in significant improvements.", "### Introduction\n\nBlossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Qwen1.5-32B pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source.\n\nTraining was conducted in two stages. The first stage used 40K Wizard, 40K Orca, 10K Math single-turn instruction datasets, training for 1 epoch; the second stage used 10K Blossom chat multi-turn dialogue dataset, and 10% randomly sampled data from the first stage, training for 3 epochs.", "### Inference\n\nInference is performed in the form of dialogue continuation.\n\nSingle-turn dialogue\n\n\n\nMulti-turn dialogue\n\n\n\nNote: At the end of the Bot's output in the historical conversation, append a '<|endoftext|>'." ]
[ "TAGS\n#transformers #safetensors #qwen2 #text-generation #zh #en #dataset-Azure99/blossom-chat-v3 #dataset-Azure99/blossom-math-v4 #dataset-Azure99/blossom-wizard-v3 #dataset-Azure99/blossom-orca-v3 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# BLOSSOM-v5-32b\n\nGithub • Blossom Chat Demo", "### What's new?\n\nThe Blossom V5 series models is fully trained using high-quality data distilled from gpt-4-0125-preview, resulting in significant improvements.", "### Introduction\n\nBlossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Qwen1.5-32B pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source.\n\nTraining was conducted in two stages. The first stage used 40K Wizard, 40K Orca, 10K Math single-turn instruction datasets, training for 1 epoch; the second stage used 10K Blossom chat multi-turn dialogue dataset, and 10% randomly sampled data from the first stage, training for 3 epochs.", "### Inference\n\nInference is performed in the form of dialogue continuation.\n\nSingle-turn dialogue\n\n\n\nMulti-turn dialogue\n\n\n\nNote: At the end of the Bot's output in the historical conversation, append a '<|endoftext|>'." ]
text-generation
transformers
# mistral-orpo-capybara-3k This model is a full fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) with ORPO on the [eduagarcia/capybara-dpo-3k](https://huggingface.co/datasets/eduagarcia/capybara-dpo-3k) dataset with the [huggingface/alignment-handbook](https://github.com/huggingface/alignment-handbook). ## Model description Trained for 4.5 hours on 1xA100 ### Alignment Handbook recipe ```yaml # Model arguments model_name_or_path: mistralai/Mistral-7B-v0.1 model_revision: main torch_dtype: bfloat16 use_flash_attention_2: true trust_remote_code: true # Data training arguments chat_template: "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}" dataset_mixer: eduagarcia/capybara-dpo-3k: 1.0 dataset_splits: - train - test preprocessing_num_workers: 8 # ORPOTrainer arguments bf16: true beta: 0.05 gradient_accumulation_steps: 8 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: true hub_model_id: mistral-orpo-capybara-3k learning_rate: 5.0e-6 log_level: info logging_steps: 10 lr_scheduler_type: inverse_sqrt max_length: 2048 max_prompt_length: 1792 num_train_epochs: 1 optim: adamw_bnb_8bit output_dir: data/mistral-orpo-capybara-3k per_device_train_batch_size: 4 push_to_hub: true report_to: - tensorboard - wandb save_strategy: "no" seed: 42 warmup_steps: 100 ``` ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.1.2 - Datasets 2.19.0 - Tokenizers 0.19.1
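The recipe above fixes the chat template used during ORPO training, so a small inference sketch can rebuild the same prompt layout by hand. It assumes the trained checkpoint is available under the id recorded for this row, eduagarcia/mistral-orpo-capybara-3k, and constructs the prompt manually from the recipe rather than relying on the pushed tokenizer carrying the template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "eduagarcia/mistral-orpo-capybara-3k"  # id from this record

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")

# Mirror the recipe's chat_template: each turn is "<|role|>\n" + content + eos_token,
# and generation starts after a bare "<|assistant|>" line.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain in one sentence what ORPO optimizes."},
]
prompt = ""
for m in messages:
    prompt += f"<|{m['role']}|>\n{m['content']}{tokenizer.eos_token}\n"
prompt += "<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```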
{"language": ["en"], "license": "apache-2.0", "tags": ["alignment-handbook", "trl", "orpo", "generated_from_trainer"], "datasets": ["eduagarcia/capybara-dpo-3k"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistral-orpo-capybara-3k", "results": []}]}
eduagarcia/mistral-orpo-capybara-3k
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "orpo", "generated_from_trainer", "conversational", "en", "dataset:eduagarcia/capybara-dpo-3k", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:28:13+00:00
[]
[ "en" ]
TAGS #transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #orpo #generated_from_trainer #conversational #en #dataset-eduagarcia/capybara-dpo-3k #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# mistral-orpo-capybara-3k This model is a full fine-tuned version of mistralai/Mistral-7B-v0.1 with ORPO on the eduagarcia/capybara-dpo-3k dataset with the huggingface/alignment-handbook. ## Model description Trained for 4.5 hours on 1xA100 ### Alignment Handbook recipe ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.1.2 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# mistral-orpo-capybara-3k\n\nThis model is a full fine-tuned version of mistralai/Mistral-7B-v0.1 with ORPO on the eduagarcia/capybara-dpo-3k dataset with the huggingface/alignment-handbook.", "## Model description\n\nTrained for 4.5 hours on 1xA100", "### Aligment Handbook recipe", "### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #mistral #text-generation #alignment-handbook #trl #orpo #generated_from_trainer #conversational #en #dataset-eduagarcia/capybara-dpo-3k #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# mistral-orpo-capybara-3k\n\nThis model is a full fine-tuned version of mistralai/Mistral-7B-v0.1 with ORPO on the eduagarcia/capybara-dpo-3k dataset with the huggingface/alignment-handbook.", "## Model description\n\nTrained for 4.5 hours on 1xA100", "### Aligment Handbook recipe", "### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/a50avyu
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:29:37+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-small-komodel This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the jsfamily/test-small-komodel dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
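A short transcription sketch for this checkpoint, using the standard transformers ASR pipeline and the repository id recorded for this row (jsfamily/test-small-komodel); the audio file path is a placeholder.

```python
from transformers import pipeline

# Whisper-small fine-tune from this record; Korean speech in, text out.
asr = pipeline(
    "automatic-speech-recognition",
    model="jsfamily/test-small-komodel",
)

# "sample_ko.wav" is a placeholder path to a local Korean audio clip.
result = asr("sample_ko.wav")
print(result["text"])
```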
{"language": ["ko"], "license": "apache-2.0", "tags": ["hf-asr-leaderboard", "generated_from_trainer"], "datasets": ["jsfamily/test-small-komodel"], "base_model": "openai/whisper-small", "model-index": [{"name": "test-small-komodel", "results": []}]}
jsfamily/test-small-komodel
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ko", "dataset:jsfamily/test-small-komodel", "base_model:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:30:22+00:00
[]
[ "ko" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #hf-asr-leaderboard #generated_from_trainer #ko #dataset-jsfamily/test-small-komodel #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us
# test-small-komodel This model is a fine-tuned version of openai/whisper-small on the jsfamily/test-small-komodel dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# test-small-komodel\n\nThis model is a fine-tuned version of openai/whisper-small on the jsfamily/test-small-komodel dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- training_steps: 4000", "### Framework versions\n\n- Transformers 4.39.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #hf-asr-leaderboard #generated_from_trainer #ko #dataset-jsfamily/test-small-komodel #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us \n", "# test-small-komodel\n\nThis model is a fine-tuned version of openai/whisper-small on the jsfamily/test-small-komodel dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- training_steps: 4000", "### Framework versions\n\n- Transformers 4.39.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Ramikan-BR/tinyllama_PY-CODER-bnb-4bit-lora_4k-v2_q4_k_m
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-29T04:34:52+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# arcee-ai/Llama-3-MegaMed-8B-Model-Stock arcee-ai/Llama-3-MegaMed-8B-Model-Stock is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): ## 🧩 Configuration ```yaml models: - model: aaditya/OpenBioLLM-Llama3-8B - model: johnsnowlabs/JSL-Med-Sft-Llama-3-8B - model: MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3 merge_method: model_stock base_model: meta-llama/Meta-Llama-3-8B dtype: float16 ```
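The YAML above is the mergekit model_stock configuration; the merge itself is typically reproduced by saving that config to a file and running mergekit's command-line tool on it. The resulting checkpoint loads like any other causal LM. A minimal generation sketch, assuming the merged weights are published under the id recorded for this row (arcee-ai/Llama-3-MegaMed-8B-Model-Stock):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Merged checkpoint named in this record; a model_stock merge behaves as a
# plain causal LM, so the usual text-generation loading path applies.
MODEL_ID = "arcee-ai/Llama-3-MegaMed-8B-Model-Stock"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")

prompt = "List three common causes of iron-deficiency anemia."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```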
{"license": "apache-2.0", "tags": ["merge", "mergekit"]}
arcee-ai/Llama-3-MegaMed-8B-Model-Stock
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:35:26+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #merge #mergekit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# arcee-ai/Llama-3-MegaMed-8B-Model-Stock arcee-ai/Llama-3-MegaMed-8B-Model-Stock is a merge of the following models using mergekit: ## Configuration
[ "# arcee-ai/Llama-3-MegaMed-8B-Model-Stock\n\narcee-ai/Llama-3-MegaMed-8B-Model-Stock is a merge of the following models using mergekit:", "## Configuration" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# arcee-ai/Llama-3-MegaMed-8B-Model-Stock\n\narcee-ai/Llama-3-MegaMed-8B-Model-Stock is a merge of the following models using mergekit:", "## Configuration" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Crysiss/llama-3-8B-healthcare-15000
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:36:17+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
kssumanth6/t5_small_chit_chat_generator_v3
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:36:27+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
bsheon/adsl
null
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:37:29+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
## モデル - ベースモデル:[ryota39/llm-jp-1b-sft-100k-LoRA](https://huggingface.co/ryota39/llm-jp-1b-sft-100k-LoRA) - 学習データセット:[ryota39/dpo-ja-194k](https://huggingface.co/datasets/ryota39/dpo-ja-194k) - 学習方式:フルパラメータチューニング ## サンプル ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained( "ryota39/llm-jp-1b-sft-15k" ) pad_token_id = tokenizer.pad_token_id model = AutoModelForCausalLM.from_pretrained( "ryota39/llm-jp-1b-sft-15k", device_map="auto", ) text = "###Input: 東京の観光名所を教えてください。\n###Output: " tokenized_input = tokenizer.encode( text, add_special_tokens=False, return_tensors="pt" ).to(model.device) attention_mask = torch.ones_like(tokenized_input) attention_mask[tokenized_input == pad_token_id] = 0 with torch.no_grad(): output = model.generate( tokenized_input, attention_mask=attention_mask, max_new_tokens=128, do_sample=True, top_p=0.95, temperature=0.8, repetition_penalty=1.0 )[0] print(tokenizer.decode(output)) ``` ## 出力例 ``` ###Input: 東京の観光名所を教えてください。 ###Output: 浅草寺。東京都台東区にある日本の仏教寺院。 浅草寺は、徳川家の菩提寺として有名な寺院。この寺は、創建から200年以上もの歴史を持ち、多くの人々から信仰されている。 また、境内には多くの建造物があり、歴史を感じることが出来る。また、境内には雷門があり、多くの人が訪れている。 また、本堂の中には、本尊である釈迦如来と文殊菩薩が安置されており、歴史を感じながら参拝することが出来る。この浅草寺は、東京都の観光 ``` ## 謝辞 本成果は【LOCAL AI HACKATHON #001】240時間ハッカソンの成果です。 運営の方々に深く御礼申し上げます。 - 【メタデータラボ株式会社】様 - 【AI声づくり技術研究会】 - サーバー主:やなぎ(Yanagi)様 - 【ローカルLLMに向き合う会】 - サーバー主:saldra(サルドラ)様 [メタデータラボ、日本最大規模のAIハッカソン「LOCAL AI HACKATHON #001」~ AIの民主化 ~を開催、本日より出場チームの募集を開始](https://prtimes.jp/main/html/rd/p/000000008.000056944.html)
{"language": ["ja"], "license": "cc", "library_name": "transformers", "tags": ["dpo"], "datasets": ["ryota39/dpo-ja-194k"]}
ryota39/llm-jp-1b-sft-100k-LoRA-dpo-194k
null
[ "transformers", "safetensors", "gpt2", "text-generation", "dpo", "ja", "dataset:ryota39/dpo-ja-194k", "license:cc", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:37:39+00:00
[]
[ "ja" ]
TAGS #transformers #safetensors #gpt2 #text-generation #dpo #ja #dataset-ryota39/dpo-ja-194k #license-cc #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
## モデル - ベースモデル:ryota39/llm-jp-1b-sft-100k-LoRA - 学習データセット:ryota39/dpo-ja-194k - 学習方式:フルパラメータチューニング ## サンプル ## 出力例 ## 謝辞 本成果は【LOCAL AI HACKATHON #001】240時間ハッカソンの成果です。 運営の方々に深く御礼申し上げます。 - 【メタデータラボ株式会社】様 - 【AI声づくり技術研究会】 - サーバー主:やなぎ(Yanagi)様 - 【ローカルLLMに向き合う会】 - サーバー主:saldra(サルドラ)様 メタデータラボ、日本最大規模のAIハッカソン「LOCAL AI HACKATHON #001」~ AIの民主化 ~を開催、本日より出場チームの募集を開始
[ "## モデル\n\n- ベースモデル:ryota39/llm-jp-1b-sft-100k-LoRA\n- 学習データセット:ryota39/dpo-ja-194k\n- 学習方式:フルパラメータチューニング", "## サンプル", "## 出力例", "## 謝辞\n\n本成果は【LOCAL AI HACKATHON #001】240時間ハッカソンの成果です。\n運営の方々に深く御礼申し上げます。\n\n- 【メタデータラボ株式会社】様\n- 【AI声づくり技術研究会】\n - サーバー主:やなぎ(Yanagi)様\n- 【ローカルLLMに向き合う会】\n - サーバー主:saldra(サルドラ)様\n\nメタデータラボ、日本最大規模のAIハッカソン「LOCAL AI HACKATHON #001」~ AIの民主化 ~を開催、本日より出場チームの募集を開始" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #dpo #ja #dataset-ryota39/dpo-ja-194k #license-cc #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## モデル\n\n- ベースモデル:ryota39/llm-jp-1b-sft-100k-LoRA\n- 学習データセット:ryota39/dpo-ja-194k\n- 学習方式:フルパラメータチューニング", "## サンプル", "## 出力例", "## 謝辞\n\n本成果は【LOCAL AI HACKATHON #001】240時間ハッカソンの成果です。\n運営の方々に深く御礼申し上げます。\n\n- 【メタデータラボ株式会社】様\n- 【AI声づくり技術研究会】\n - サーバー主:やなぎ(Yanagi)様\n- 【ローカルLLMに向き合う会】\n - サーバー主:saldra(サルドラ)様\n\nメタデータラボ、日本最大規模のAIハッカソン「LOCAL AI HACKATHON #001」~ AIの民主化 ~を開催、本日より出場チームの募集を開始" ]
text-to-image
diffusers
# SDXL LoRA DreamBooth - aarashfeizi/jean-francois-godbout <Gallery /> ## Model description ### These are aarashfeizi/jean-francois-godbout LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout.safetensors` here 💾](/aarashfeizi/jean-francois-godbout/blob/main//home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout.safetensors)**. - Place it on your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout_emb.safetensors` here 💾](/aarashfeizi/jean-francois-godbout/blob/main//home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout_emb.safetensors)**. - Place it on it on your `embeddings` folder - Use it by adding `/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout_emb` to your prompt. For example, `A photo of Jean-Francois Godbout` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('aarashfeizi/jean-francois-godbout', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='aarashfeizi/jean-francois-godbout', filename='/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=[], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=[], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of Jean-Francois Godbout talking with Joe Biden').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Details All [Files & versions](/aarashfeizi/jean-francois-godbout/tree/main). The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled. False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
{"license": "openrail++", "tags": ["stable-diffusion-xl", "stable-diffusion-xl-diffusers", "diffusers-training", "text-to-image", "diffusers", "lora", "template:sd-lora"], "widget": [{"text": "A photo of Jean-Francois Godbout talking with Joe Biden", "output": {"url": "image_0.png"}}, {"text": "A photo of Jean-Francois Godbout talking with Joe Biden", "output": {"url": "image_1.png"}}, {"text": "A photo of Jean-Francois Godbout talking with Joe Biden", "output": {"url": "image_2.png"}}, {"text": "A photo of Jean-Francois Godbout talking with Joe Biden", "output": {"url": "image_3.png"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "A photo of Jean-Francois Godbout"}
aarashfeizi/jean-francois-godbout
null
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "diffusers-training", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-29T04:37:56+00:00
[]
[]
TAGS #diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #diffusers-training #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - aarashfeizi/jean-francois-godbout <Gallery /> ## Model description ### These are aarashfeizi/jean-francois-godbout LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - LoRA: download '/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout.safetensors' here . - Place it on your 'models/Lora' folder. - On AUTOMATIC1111, load the LoRA by adding '<lora:/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout:1>' to your prompt. On ComfyUI just load it as a regular LoRA. - *Embeddings*: download '/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout_emb.safetensors' here . - Place it on it on your 'embeddings' folder - Use it by adding '/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout_emb' to your prompt. For example, 'A photo of Jean-Francois Godbout' (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the diffusers library For more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers ## Trigger words To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens: to trigger concept 'TOK' → use '<s0><s1>' in your prompt ## Details All Files & versions. The weights were trained using diffusers Advanced Dreambooth Training Script. LoRA for the text encoder was enabled. False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
[ "# SDXL LoRA DreamBooth - aarashfeizi/jean-francois-godbout\n\n<Gallery />", "## Model description", "### These are aarashfeizi/jean-francois-godbout LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.", "## Download model", "### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke\n\n- LoRA: download '/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout.safetensors' here .\n - Place it on your 'models/Lora' folder.\n - On AUTOMATIC1111, load the LoRA by adding '<lora:/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout:1>' to your prompt. On ComfyUI just load it as a regular LoRA.\n- *Embeddings*: download '/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout_emb.safetensors' here .\n - Place it on it on your 'embeddings' folder\n - Use it by adding '/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout_emb' to your prompt. For example, 'A photo of Jean-Francois Godbout'\n (you need both the LoRA and the embeddings as they were trained together for this LoRA)", "## Use it with the diffusers library\n\n\n\nFor more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers", "## Trigger words\n\nTo trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens:\n\nto trigger concept 'TOK' → use '<s0><s1>' in your prompt", "## Details\nAll Files & versions.\n\nThe weights were trained using diffusers Advanced Dreambooth Training Script.\n\nLoRA for the text encoder was enabled. False.\n\nPivotal tuning was enabled: True.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix." ]
[ "TAGS\n#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #diffusers-training #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - aarashfeizi/jean-francois-godbout\n\n<Gallery />", "## Model description", "### These are aarashfeizi/jean-francois-godbout LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.", "## Download model", "### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke\n\n- LoRA: download '/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout.safetensors' here .\n - Place it on your 'models/Lora' folder.\n - On AUTOMATIC1111, load the LoRA by adding '<lora:/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout:1>' to your prompt. On ComfyUI just load it as a regular LoRA.\n- *Embeddings*: download '/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout_emb.safetensors' here .\n - Place it on it on your 'embeddings' folder\n - Use it by adding '/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout_emb' to your prompt. For example, 'A photo of Jean-Francois Godbout'\n (you need both the LoRA and the embeddings as they were trained together for this LoRA)", "## Use it with the diffusers library\n\n\n\nFor more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers", "## Trigger words\n\nTo trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens:\n\nto trigger concept 'TOK' → use '<s0><s1>' in your prompt", "## Details\nAll Files & versions.\n\nThe weights were trained using diffusers Advanced Dreambooth Training Script.\n\nLoRA for the text encoder was enabled. False.\n\nPivotal tuning was enabled: True.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix." ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0428B2 This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1472 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 60 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.8141 | 0.09 | 10 | 2.3918 | | 2.3705 | 0.18 | 20 | 2.2984 | | 2.1425 | 0.27 | 30 | 1.8693 | | 1.5052 | 0.36 | 40 | 0.9945 | | 0.6052 | 0.45 | 50 | 0.2390 | | 0.2304 | 0.54 | 60 | 0.1544 | | 0.1666 | 0.63 | 70 | 0.1506 | | 0.1633 | 0.73 | 80 | 0.1514 | | 0.1444 | 0.82 | 90 | 0.1481 | | 0.1551 | 0.91 | 100 | 0.1476 | | 0.1575 | 1.0 | 110 | 0.1517 | | 0.1455 | 1.09 | 120 | 0.1483 | | 0.1724 | 1.18 | 130 | 0.1493 | | 0.1558 | 1.27 | 140 | 0.1484 | | 0.1604 | 1.36 | 150 | 0.1473 | | 0.1432 | 1.45 | 160 | 0.1488 | | 0.1446 | 1.54 | 170 | 0.1467 | | 0.1463 | 1.63 | 180 | 0.1470 | | 0.1554 | 1.72 | 190 | 0.1506 | | 0.1595 | 1.81 | 200 | 0.1480 | | 0.1462 | 1.9 | 210 | 0.1475 | | 0.1449 | 1.99 | 220 | 0.1494 | | 0.1464 | 2.08 | 230 | 0.1473 | | 0.1671 | 2.18 | 240 | 0.1473 | | 0.1437 | 2.27 | 250 | 0.1472 | | 0.1448 | 2.36 | 260 | 0.1478 | | 0.1508 | 2.45 | 270 | 0.1477 | | 0.152 | 2.54 | 280 | 0.1475 | | 0.1547 | 2.63 | 290 | 0.1476 | | 0.1447 | 2.72 | 300 | 0.1474 | | 0.1448 | 2.81 | 310 | 0.1472 | | 0.1646 | 2.9 | 320 | 0.1472 | | 0.1567 | 2.99 | 330 | 0.1472 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0428B2", "results": []}]}
Litzy619/O0428B2
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
2024-04-29T04:38:32+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us
O0428B2 ======= This model is a fine-tuned version of allenai/OLMo-1B on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1472 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 32 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 60 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 32\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-allenai/OLMo-1B #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 32\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilled-code-llama This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 6 ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "model-index": [{"name": "distilled-code-llama", "results": []}]}
anudaw/distilled-code-llama
null
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:38:42+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #trl #sft #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# distilled-code-llama This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 6 ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# distilled-code-llama\n\nThis model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 6", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #generated_from_trainer #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# distilled-code-llama\n\nThis model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 6", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
null
# DavidAU/LWM-Text-1M-Q8_0-GGUF This model was converted to GGUF format from [`LargeWorldModel/LWM-Text-1M`](https://huggingface.co/LargeWorldModel/LWM-Text-1M) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LargeWorldModel/LWM-Text-1M) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/LWM-Text-1M-Q8_0-GGUF --model lwm-text-1m.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/LWM-Text-1M-Q8_0-GGUF --model lwm-text-1m.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m lwm-text-1m.Q8_0.gguf -n 128 ```
{"tags": ["llama-cpp", "gguf-my-repo"], "inference": false}
DavidAU/LWM-Text-1M-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-29T04:38:55+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# DavidAU/LWM-Text-1M-Q8_0-GGUF This model was converted to GGUF format from 'LargeWorldModel/LWM-Text-1M' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/LWM-Text-1M-Q8_0-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-1M' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# DavidAU/LWM-Text-1M-Q8_0-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-1M' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# Uploaded model - **Developed by:** baka999 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
baka999/ruozhiba_lora_model
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:39:23+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: baka999 - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: baka999\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: baka999\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
null
# DavidAU/LWM-Text-256K-Q8_0-GGUF This model was converted to GGUF format from [`LargeWorldModel/LWM-Text-256K`](https://huggingface.co/LargeWorldModel/LWM-Text-256K) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LargeWorldModel/LWM-Text-256K) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/LWM-Text-256K-Q8_0-GGUF --model lwm-text-256k.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/LWM-Text-256K-Q8_0-GGUF --model lwm-text-256k.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m lwm-text-256k.Q8_0.gguf -n 128 ```
{"tags": ["llama-cpp", "gguf-my-repo"], "inference": false}
DavidAU/LWM-Text-256K-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-29T04:39:43+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# DavidAU/LWM-Text-256K-Q8_0-GGUF This model was converted to GGUF format from 'LargeWorldModel/LWM-Text-256K' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/LWM-Text-256K-Q8_0-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-256K' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# DavidAU/LWM-Text-256K-Q8_0-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-256K' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/LWM-Text-128K-Q8_0-GGUF This model was converted to GGUF format from [`LargeWorldModel/LWM-Text-128K`](https://huggingface.co/LargeWorldModel/LWM-Text-128K) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LargeWorldModel/LWM-Text-128K) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/LWM-Text-128K-Q8_0-GGUF --model lwm-text-128k.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/LWM-Text-128K-Q8_0-GGUF --model lwm-text-128k.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m lwm-text-128k.Q8_0.gguf -n 128 ```
{"tags": ["llama-cpp", "gguf-my-repo"], "inference": false}
DavidAU/LWM-Text-128K-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-29T04:40:24+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# DavidAU/LWM-Text-128K-Q8_0-GGUF This model was converted to GGUF format from 'LargeWorldModel/LWM-Text-128K' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/LWM-Text-128K-Q8_0-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-128K' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# DavidAU/LWM-Text-128K-Q8_0-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-128K' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> House plans #trained on:WhiteCase ### Model Description <!-- Provide a longer summary of what this model is. --> ```python ## Model Details class TrainingConfig: image_size = 192 # the generated image resolution train_batch_size = 16 eval_batch_size = 16 # how many images to sample during evaluation num_epochs = 200 gradient_accumulation_steps = 1 learning_rate = 1e-4 lr_warmup_steps = 500 save_image_epochs = 10 save_model_epochs = 30 mixed_precision = 'fp16' # `no` for float32, `fp16` for automatic mixed precision output_dir = 'ddpm-butterflies-128' # the model name locally and on the HF Hub push_to_hub = True # whether to upload the saved model to the HF Hub hub_private_repo = False overwrite_output_dir = True # overwrite the old model when re-running the notebook seed = 0 config = TrainingConfig() ``` This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
uisikdag/robin
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "diffusers:DDPMPipeline", "region:us" ]
null
2024-04-29T04:40:28+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #diffusers-DDPMPipeline #region-us
# Model Card for Model ID House plans #trained on:WhiteCase ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID\n\n\n\nHouse plans", "### Model Description\n\n\n\n\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #diffusers-DDPMPipeline #region-us \n", "# Model Card for Model ID\n\n\n\nHouse plans", "### Model Description\n\n\n\n\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** **விபின்** - **Model type:** T5-small - **Language(s) (NLP):** English - **License:** Apache 2.0 license - **Finetuned from model [optional]:** T5-small model ## Uses This model aims to respond with extractive and abstractive keyphrases for the given content. Kindly use "find keyphrase: " as the task prefix prompt to get the desired outputs. ## Bias, Risks, and Limitations This model response is based on the inputs given to it. So if any Harmful sentences given to this model, it will respond according to that. ## How to Get Started with the Model ``` from transformers import T5Tokenizer, T5ForConditionalGeneration import torch model_dir = "rv2307/keyphrase-abstraction-t5-small" tokenizer = T5Tokenizer.from_pretrained(model_dir) model = T5ForConditionalGeneration.from_pretrained(model_dir, torch_dtype=torch.bfloat16) device = "cuda" model.to(device) def generate(text): text = "find keyphrase: " + text inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt') inputs = {k:v.to(model.device) for k,v in inputs.items()} with torch.no_grad(): outputs = model.generate( inputs['input_ids'], attention_mask=inputs['attention_mask'], max_length=100, use_cache=True ) output_list = tokenizer.decode(outputs[0],skip_special_tokens=True) return output_list content = "Use of BICs by businesses has been recommended by the Task Force on Nature-related Financial Disclosures[2] and the first provider of BICs for sale is Botanic Gardens Conservation International (BGCI). The credits are generated by BGCI's international member organisations by rebuilding the populations of tree species at high risk of extinction under the IUCN Red List methodology.[3]" outputs = generate(content) print(outputs) """ [ "BICs for businesses", "Task Force on Naturerelated Financial Disclosures", "Botanic Gardens Conservation International (BGCI)", "Rebuilding tree species at high risk", "IUCN Red List methodology", "Credits generated by BGCI", "International member organisations" ] """ ``` ## Training Details ### Training Data Mostly used open source datasets for these tasks, which are already available on the huggingface. ### Training Procedure This model has been fine tuned for 6 epochs with 40k datasets collected from the internet. ### Results ``` Epoch Training Loss Validation Loss Rouge1 Rouge2 Rougel Rougelsum Gen Len 1 0.105800 0.087497 43.840900 19.029900 40.303200 40.320300 16.306200 2 0.097600 0.081029 46.335000 21.246800 42.377400 42.387500 16.404900 3 0.091800 0.077546 47.721200 22.467200 43.622400 43.632000 16.308200 4 0.087600 0.075441 48.633700 23.351300 44.493800 44.504300 16.359000 5 0.088200 0.074088 48.977500 23.747000 44.804900 44.813200 16.300500 6 0.084900 0.073381 49.347300 24.029500 45.097100 45.108300 16.332600 ```
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "metrics": ["rouge", "bleu"]}
rv2307/keyphrase-abstraction-t5-small
null
[ "transformers", "safetensors", "t5", "text2text-generation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:41:24+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #t5 #text2text-generation #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID This modelcard aims to be a base template for new models. It has been generated using this raw template. ## Model Details ### Model Description - Developed by: விபின் - Model type: T5-small - Language(s) (NLP): English - License: Apache 2.0 license - Finetuned from model [optional]: T5-small model ## Uses This model aims to respond with extractive and abstractive keyphrases for the given content. Kindly use "find keyphrase: " as the task prefix prompt to get the desired outputs. ## Bias, Risks, and Limitations This model response is based on the inputs given to it. So if any Harmful sentences given to this model, it will respond according to that. ## How to Get Started with the Model ## Training Details ### Training Data Mostly used open source datasets for these tasks, which are already available on the huggingface. ### Training Procedure This model has been fine tuned for 6 epochs with 40k datasets collected from the internet. ### Results
[ "# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: விபின்\n- Model type: T5-small \n- Language(s) (NLP): English\n- License: Apache 2.0 license\n- Finetuned from model [optional]: T5-small model", "## Uses\n\nThis model aims to respond with extractive and abstractive keyphrases for the given content. Kindly use \"find keyphrase: \" as the task prefix prompt to get the desired outputs.", "## Bias, Risks, and Limitations\n\nThis model response is based on the inputs given to it. So if any Harmful sentences given to this model, it will respond according to that.", "## How to Get Started with the Model", "## Training Details", "### Training Data\n\nMostly used open source datasets for these tasks, which are already available on the huggingface.", "### Training Procedure\n\nThis model has been fine tuned for 6 epochs with 40k datasets collected from the internet.", "### Results" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: விபின்\n- Model type: T5-small \n- Language(s) (NLP): English\n- License: Apache 2.0 license\n- Finetuned from model [optional]: T5-small model", "## Uses\n\nThis model aims to respond with extractive and abstractive keyphrases for the given content. Kindly use \"find keyphrase: \" as the task prefix prompt to get the desired outputs.", "## Bias, Risks, and Limitations\n\nThis model response is based on the inputs given to it. So if any Harmful sentences given to this model, it will respond according to that.", "## How to Get Started with the Model", "## Training Details", "### Training Data\n\nMostly used open source datasets for these tasks, which are already available on the huggingface.", "### Training Procedure\n\nThis model has been fine tuned for 6 epochs with 40k datasets collected from the internet.", "### Results" ]
text-generation
transformers
Quantizations of https://huggingface.co/mergekit-community/HX-Mistral-3B_v0.1 # From original readme # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: float16 merge_method: linear slices: - sources: - layer_range: [0, 16] # Assuming the first half of the model is more general and can be reduced more model: mistralai/Mistral-7B-Instruct-v0.2 parameters: weight: 0.5 # Reduce the weight of the first half to make room for the second half - layer_range: [16, 32] # Assuming the second half of the model is more specialized and can be reduced less model: mistralai/Mistral-7B-Instruct-v0.2 parameters: weight: 0.5 # Maintain the weight of the second half ```
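As a small, hedged illustration of how a mergekit configuration like the one above is typically executed: save the YAML to a local file and run the `mergekit-yaml` command-line entry point that `pip install mergekit` provides. The file and output directory names below are arbitrary choices for the example, not part of this repository.

```python
# Hedged sketch: shell out to mergekit's CLI on a locally saved copy of the YAML above.
# "merge-config.yml" and "merged-model" are illustrative names chosen for this example.
import subprocess

subprocess.run(["mergekit-yaml", "merge-config.yml", "merged-model"], check=True)
```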
{"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "HX-Mistral-3B_v0.1"], "pipeline_tag": "text-generation", "inference": false}
duyntnet/HX-Mistral-3B_v0.1-imatrix-GGUF
null
[ "transformers", "gguf", "imatrix", "HX-Mistral-3B_v0.1", "text-generation", "en", "arxiv:2203.05482", "license:other", "region:us" ]
null
2024-04-29T04:41:45+00:00
[ "2203.05482" ]
[ "en" ]
TAGS #transformers #gguf #imatrix #HX-Mistral-3B_v0.1 #text-generation #en #arxiv-2203.05482 #license-other #region-us
Quantizations of URL # From original readme # merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the linear merge method. ### Models Merged The following models were included in the merge: * mistralai/Mistral-7B-Instruct-v0.2 ### Configuration The following YAML configuration was used to produce this model:
[ "# From original readme", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the linear merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* mistralai/Mistral-7B-Instruct-v0.2", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #gguf #imatrix #HX-Mistral-3B_v0.1 #text-generation #en #arxiv-2203.05482 #license-other #region-us \n", "# From original readme", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the linear merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* mistralai/Mistral-7B-Instruct-v0.2", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
hbin0701/Llama_3b_MATH_FT_checkpoint-1200
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:41:46+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/alpindale/miquella-120b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/miquella-120b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 31.2 | | | [GGUF](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 34.7 | | | [GGUF](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-IQ2_S.gguf) | i1-IQ2_S | 36.5 | | | [GGUF](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-IQ2_M.gguf) | i1-IQ2_M | 39.7 | | | [GGUF](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q2_K.gguf) | i1-Q2_K | 43.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 45.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 48.2 | | | [PART 1](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 50.8 | IQ3_XS probably better | | [PART 1](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 51.0 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 52.7 | | | [PART 1](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 56.7 | IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 61.8 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 62.9 | | | [PART 1](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q4_0.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 66.7 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 66.9 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 70.7 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 81.1 | | | [PART 1](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 83.3 | | | [PART 1](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquella-120b-i1-GGUF/resolve/main/miquella-120b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 96.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
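Several of the larger quants in the table above are split into `.part1of2`/`.part2of2` files. As a concrete, hedged example of the concatenation step mentioned in the Usage section, the snippet below joins the two parts of the i1-Q4_K_M quant into a single GGUF file; swap in the part names of whichever quant you actually downloaded.

```python
# Join a split GGUF download back into one file before loading it with llama.cpp.
# Part names follow the i1-Q4_K_M row of the table above; adjust for other quants.
parts = [
    "miquella-120b.i1-Q4_K_M.gguf.part1of2",
    "miquella-120b.i1-Q4_K_M.gguf.part2of2",
]

with open("miquella-120b.i1-Q4_K_M.gguf", "wb") as merged:
    for name in parts:
        with open(name, "rb") as src:
            while chunk := src.read(1 << 20):  # stream in 1 MiB chunks
                merged.write(chunk)
```

The multi-part files are a single GGUF cut into pieces, so a plain byte-wise concatenation like this is all the Usage section is asking for.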
{"language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "alpindale/miquella-120b", "quantized_by": "mradermacher"}
mradermacher/miquella-120b-i1-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:alpindale/miquella-120b", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:41:56+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mergekit #merge #en #base_model-alpindale/miquella-120b #endpoints_compatible #region-us
About ----- weighted/imatrix quants of URL static quants are available at URL Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-alpindale/miquella-120b #endpoints_compatible #region-us \n" ]
text-to-image
diffusers
## 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). ```python from diffusers import StableDiffusionPipeline import torch model_id = "fr4b/compose-v2" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "YOUR PROMPT" image = pipe(prompt).images[0] image.save("image.png") ```
{}
nextab/Compose-v2.0
null
[ "diffusers", "safetensors", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-29T04:42:09+00:00
[]
[]
TAGS #diffusers #safetensors #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
## Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the Stable Diffusion.
[ "## Diffusers\n\nThis model can be used just like any other Stable Diffusion model. For more information,\nplease have a look at the Stable Diffusion." ]
[ "TAGS\n#diffusers #safetensors #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "## Diffusers\n\nThis model can be used just like any other Stable Diffusion model. For more information,\nplease have a look at the Stable Diffusion." ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ZeroWater93/fast_whisper-large-v2-korea-common_17
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:43:29+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# DavidAU/LWM-Text-32K-Q8_0-GGUF This model was converted to GGUF format from [`LargeWorldModel/LWM-Text-32K`](https://huggingface.co/LargeWorldModel/LWM-Text-32K) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LargeWorldModel/LWM-Text-32K) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/LWM-Text-32K-Q8_0-GGUF --model lwm-text-32k.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/LWM-Text-32K-Q8_0-GGUF --model lwm-text-32k.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m lwm-text-32k.Q8_0.gguf -n 128 ```
{"tags": ["llama-cpp", "gguf-my-repo"], "inference": false}
DavidAU/LWM-Text-32K-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-29T04:44:18+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# DavidAU/LWM-Text-32K-Q8_0-GGUF This model was converted to GGUF format from 'LargeWorldModel/LWM-Text-32K' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/LWM-Text-32K-Q8_0-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-32K' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# DavidAU/LWM-Text-32K-Q8_0-GGUF\nThis model was converted to GGUF format from 'LargeWorldModel/LWM-Text-32K' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
image-text-to-text
transformers
### TinyLLaVA We trained one model with fewer than 1B parameters using the TinyLLaVA approach, employing the same training settings as [TinyLLaVA](https://github.com/DLCV-BUAA/TinyLLaVABench). For the Language and Vision models, we chose [OpenELM-450M-Instruct](apple/OpenELM-450M-Instruct) and [clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16), respectively. The Connector was configured with a 2-layer MLP. The dataset used for training is the same as [LLaVA](https://github.com/haotian-liu/LLaVA). During testing, we found that [TinyLLaVA-0.55B](https://huggingface.co/jiajunlong/TinyLLaVA-0.55B) exhibited significantly faster inference speed on CPU compared to [TinyLLaVA-1.5B](https://huggingface.co/bczhou/TinyLLaVA-1.5B).
### Usage 1. Download the generation script "generate_model.py". 2. Run the following command: ```bash python generate_model.py --model jiajunlong/TinyLLaVA-0.89B --prompt 'you want to ask' --image '/path/to/related/image' ``` or execute the following test code: ```python import os from transformers import AutoTokenizer, AutoModelForCausalLM from generate_model import * model = AutoModelForCausalLM.from_pretrained("jiajunlong/TinyLLaVA-0.55B", trust_remote_code=True) config = model.config tokenizer = AutoTokenizer.from_pretrained("jiajunlong/TinyLLaVA-0.55B", use_fast=False, model_max_length = config.tokenizer_model_max_length,padding_side = config.tokenizer_padding_side) prompt="you want to ask" image="/path/to/related/image" output_text, generation_time = generate(prompt=prompt, image=image, model=model, tokenizer=tokenizer) print_txt = ( f'\r\n{"=" * os.get_terminal_size().columns}\r\n' '\033[1m Prompt + Generated Output\033[0m\r\n' f'{"-" * os.get_terminal_size().columns}\r\n' f'{output_text}\r\n' f'{"-" * os.get_terminal_size().columns}\r\n' '\r\nGeneration took' f'\033[1m\033[92m {round(generation_time, 2)} \033[0m' 'seconds.\r\n' ) print(print_txt) ```
### Result | model_name | gqa | textvqa | sqa | vqav2 | MME | MMB | MM-VET | | :----------------------------------------------------------: | ----- | ------- | ----- | ----- | ------- | ----- | ------ | | [TinyLLaVA-1.5B](https://huggingface.co/bczhou/TinyLLaVA-1.5B) | 60.3 | 51.7 | 60.3 | 76.9 | 1276.5 | 55.2 | 25.8 | | [TinyLLaVA-0.55B](https://huggingface.co/jiajunlong/TinyLLaVA-0.89B) | 50.38 | 36.37 | 50.02 | 65.44 | 1056.69 | 26.29 | 15.4 |
{"license": "apache-2.0", "pipeline_tag": "image-text-to-text"}
jiajunlong/TinyLLaVA-0.55B
null
[ "transformers", "safetensors", "text-generation", "image-text-to-text", "custom_code", "license:apache-2.0", "autotrain_compatible", "region:us" ]
null
2024-04-29T04:44:54+00:00
[]
[]
TAGS #transformers #safetensors #text-generation #image-text-to-text #custom_code #license-apache-2.0 #autotrain_compatible #region-us
### TinyLLaVA We trained one model with fewer than 1B parameters using the TinyLLaVA approach, employing the same training settings as TinyLLaVA. For the Language and Vision models, we chose OpenELM-450M-Instruct and clip-vit-base-patch16, respectively. The Connector was configured with a 2-layer MLP. The dataset used for training is the same as LLaVA. During testing, we found that TinyLLaVA-0.55B exhibited significantly faster inference speed on CPU compared to TinyLLaVA-1.5B. ### Usage 1. Download the generation script "generate\_model.py". 2. Run the following command: or execute the following test code: ### Result
[ "### TinyLLaVA\n\n\nWe trained 1 model with fewer than 1B parameters using the TinyLLaVA approach, employing the same training settings as TinyLLaVA. For the Language and Vision models, we chose OpenELM-450M-Instruct and clip-vit-base-patch16, respectively. The Connector was configured with a 2-layer MLP. The dataset used for training is the save as LLaVA. During testing, we found that TinyLLaVA-0.55B exhibited significantly faster inference speed on CPU compared to TinyLLaVA-1.5B", "### Usage\n\n\n1. you need to download the generate file \"generate\\_model.py\".\n2. running the following command:\n\n\nor execute the following test code:", "### Result" ]
[ "TAGS\n#transformers #safetensors #text-generation #image-text-to-text #custom_code #license-apache-2.0 #autotrain_compatible #region-us \n", "### TinyLLaVA\n\n\nWe trained 1 model with fewer than 1B parameters using the TinyLLaVA approach, employing the same training settings as TinyLLaVA. For the Language and Vision models, we chose OpenELM-450M-Instruct and clip-vit-base-patch16, respectively. The Connector was configured with a 2-layer MLP. The dataset used for training is the save as LLaVA. During testing, we found that TinyLLaVA-0.55B exhibited significantly faster inference speed on CPU compared to TinyLLaVA-1.5B", "### Usage\n\n\n1. you need to download the generate file \"generate\\_model.py\".\n2. running the following command:\n\n\nor execute the following test code:", "### Result" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llava-1.5-7b-hf-ft-mix-vsft-3 This model is a fine-tuned version of [HuggingFaceH4/vsft-llava-1.5-7b-hf-trl](https://huggingface.co/HuggingFaceH4/vsft-llava-1.5-7b-hf-trl) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.4e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.19.1
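The card gives no usage code ("More information needed"). As a hedged sketch rather than an official snippet from the authors, an adapter produced by this kind of PEFT/TRL fine-tune can usually be loaded on top of the base model named above; the repo ids come from this card, while the dtype and the choice of `LlavaForConditionalGeneration` for the base checkpoint are assumptions.

```python
# Hedged sketch: load the base vsft-llava checkpoint and apply this PEFT adapter to it.
import torch
from peft import PeftModel
from transformers import AutoProcessor, LlavaForConditionalGeneration

base_id = "HuggingFaceH4/vsft-llava-1.5-7b-hf-trl"    # base model named in this card
adapter_id = "Salmoli/llava-1.5-7b-hf-ft-mix-vsft-3"  # this repository

processor = AutoProcessor.from_pretrained(base_id)
base_model = LlavaForConditionalGeneration.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```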
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "HuggingFaceH4/vsft-llava-1.5-7b-hf-trl", "model-index": [{"name": "llava-1.5-7b-hf-ft-mix-vsft-3", "results": []}]}
Salmoli/llava-1.5-7b-hf-ft-mix-vsft-3
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:HuggingFaceH4/vsft-llava-1.5-7b-hf-trl", "region:us" ]
null
2024-04-29T04:45:50+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-HuggingFaceH4/vsft-llava-1.5-7b-hf-trl #region-us
# llava-1.5-7b-hf-ft-mix-vsft-3 This model is a fine-tuned version of HuggingFaceH4/vsft-llava-1.5-7b-hf-trl on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.4e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.19.1
[ "# llava-1.5-7b-hf-ft-mix-vsft-3\n\nThis model is a fine-tuned version of HuggingFaceH4/vsft-llava-1.5-7b-hf-trl on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 4\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-HuggingFaceH4/vsft-llava-1.5-7b-hf-trl #region-us \n", "# llava-1.5-7b-hf-ft-mix-vsft-3\n\nThis model is a fine-tuned version of HuggingFaceH4/vsft-llava-1.5-7b-hf-trl on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.4e-05\n- train_batch_size: 4\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/8w767z1
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:46:08+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/843unq1
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:47:12+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llamaduo_synth_ds_v0.1 This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the chansung/synth_ds dataset. It achieves the following results on the evaluation set: - Loss: 3.8292 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 3 - gradient_accumulation_steps: 2 - total_train_batch_size: 12 - total_eval_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.7837 | 0.9995 | 939 | 1.9731 | | 0.7509 | 2.0 | 1879 | 1.9719 | | 0.7086 | 2.9995 | 2817 | 2.0286 | | 0.6156 | 4.0 | 3757 | 2.1647 | | 0.4937 | 4.9995 | 4696 | 2.3686 | | 0.4075 | 6.0 | 5636 | 2.7269 | | 0.3395 | 6.9995 | 6575 | 3.1681 | | 0.2962 | 8.0 | 7515 | 3.6134 | | 0.284 | 8.9995 | 8454 | 3.8100 | | 0.2782 | 9.9957 | 9390 | 3.8292 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
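The card above documents a PEFT adapter rather than a full checkpoint, so a brief sketch of how such an adapter is typically attached to its base model may help readers get started. The base and adapter repo ids are taken from this card's metadata; the 4-bit quantization settings and the prompt are illustrative assumptions and not part of the original card.

```python
# Hedged sketch: loading the PEFT (LoRA) adapter from this card on top of google/gemma-7b.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "google/gemma-7b"                      # base model from the card's metadata
adapter_id = "chansung/llamaduo_synth_ds_v0.1"   # adapter repo id from the card's metadata

# Assumption: 4-bit loading mirrors the "4-bit" tag on this repo; exact settings are illustrative.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter

inputs = tokenizer("Summarize: PEFT adapters keep fine-tuning lightweight.", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```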
{"license": "gemma", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer"], "datasets": ["chansung/synth_ds"], "base_model": "google/gemma-7b", "model-index": [{"name": "llamaduo_synth_ds_v0.1", "results": []}]}
chansung/llamaduo_synth_ds_v0.1
null
[ "peft", "tensorboard", "safetensors", "gemma", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:chansung/synth_ds", "base_model:google/gemma-7b", "license:gemma", "4-bit", "region:us" ]
null
2024-04-29T04:47:35+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #gemma #alignment-handbook #trl #sft #generated_from_trainer #dataset-chansung/synth_ds #base_model-google/gemma-7b #license-gemma #4-bit #region-us
llamaduo\_synth\_ds\_v0.1 ========================= This model is a fine-tuned version of google/gemma-7b on the chansung/synth\_ds dataset. It achieves the following results on the evaluation set: * Loss: 3.8292 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 2 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 3 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 12 * total\_eval\_batch\_size: 12 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.0 * Pytorch 2.2.2+cu121 * Datasets 2.14.6 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 3\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 12\n* total\\_eval\\_batch\\_size: 12\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #gemma #alignment-handbook #trl #sft #generated_from_trainer #dataset-chansung/synth_ds #base_model-google/gemma-7b #license-gemma #4-bit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 3\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 12\n* total\\_eval\\_batch\\_size: 12\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.0\n* Pytorch 2.2.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lunarsylph/stablecell_v48
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:48:18+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# DavidAU/Tiamat-7b-Q6_K-GGUF This model was converted to GGUF format from [`Gryphe/Tiamat-7b`](https://huggingface.co/Gryphe/Tiamat-7b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Gryphe/Tiamat-7b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Tiamat-7b-Q6_K-GGUF --model tiamat-7b.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Tiamat-7b-Q6_K-GGUF --model tiamat-7b.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiamat-7b.Q6_K.gguf -n 128 ```
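The commands above cover the llama.cpp CLI and server; for readers who prefer Python, a minimal sketch using the llama-cpp-python bindings is shown below. The GGUF filename and context size match the card, while the bindings and sampling settings are assumptions not stated in the original.

```python
# Minimal sketch (assumption: llama-cpp-python is installed, e.g. `pip install llama-cpp-python`).
from llama_cpp import Llama

llm = Llama(
    model_path="tiamat-7b.Q6_K.gguf",  # GGUF file referenced by the card above
    n_ctx=2048,                        # same context length as the server example
)
result = llm("The meaning to life and the universe is", max_tokens=128)
print(result["choices"][0]["text"])
```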
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/Tiamat-7b-Q6_K-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "license:apache-2.0", "region:us" ]
null
2024-04-29T04:49:10+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us
# DavidAU/Tiamat-7b-Q6_K-GGUF This model was converted to GGUF format from 'Gryphe/Tiamat-7b' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Tiamat-7b-Q6_K-GGUF\nThis model was converted to GGUF format from 'Gryphe/Tiamat-7b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us \n", "# DavidAU/Tiamat-7b-Q6_K-GGUF\nThis model was converted to GGUF format from 'Gryphe/Tiamat-7b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# Model Details:

- This model was created by finetuning the [unsloth/gemma-1.1-2b-it-bnb-4bit](https://huggingface.co/unsloth/gemma-1.1-2b-it-bnb-4bit) model on the [coedit dataset](https://huggingface.co/datasets/grammarly/coedit) from Grammarly.
- The finetuning followed the fine-tuning notebook provided by Unsloth, as a practice exercise in finetuning on the coedit dataset.
- The model was finetuned using the prompt format of the gemma-2b-it model.

```
<start_of_turn>user
Fix grammar in this sentence: A notable number of Chinese factories make piratical products by copying foreign products.<end_of_turn>
<start_of_turn>model
A notable number of Chinese factories make pirated products by copying foreign products.<end_of_turn>
```

- The finetuning ran 2x faster by utilizing Unsloth and Hugging Face's TRL library.

# Limitations:

The model was finetuned on a specific dataset (coedit) and may not generalize well to all text editing and generation tasks. Its performance may be limited compared to models trained on larger and more diverse datasets.

# Uploaded model

- **Developed by:** gnokit
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-1.1-2b-it-bnb-4bit

This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
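To make the prompt format above concrete, here is a hedged sketch of how one might query the uploaded model with 🤗 Transformers. The repo id and turn markers come from this card; the example sentence and generation settings are assumptions.

```python
# Hedged sketch: running the fine-tuned model with the gemma-it turn format shown above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gnokit/gemma_2b_coedit"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "<start_of_turn>user\n"
    "Fix grammar in this sentence: She go to school every days.<end_of_turn>\n"
    "<start_of_turn>model\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```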
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-1.1-2b-it-bnb-4bit"}
gnokit/gemma_2b_coedit
null
[ "transformers", "safetensors", "gguf", "gemma", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/gemma-1.1-2b-it-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:49:37+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #gguf #gemma #text-generation-inference #unsloth #trl #en #base_model-unsloth/gemma-1.1-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Model Details: - This model was created by finetuning the unsloth/gemma-1.1-2b-it-bnb-4bit model using the coedit dataset from Grammarly. - The finetuning was done following the fine-tuning notebook provided by Unsloth as a practice of finetuning using the coedit dataset. - The model was finetuned using the prompt format of the gemma-2b-it model. - The finetuning was done 2x faster by utilizing the Unsloth and Hugging Face's TRL library. # Limitations: The model was finetuned on a specific dataset (coedit) and may not generalize well to all Italian text generation tasks. Its performance may be limited compared to models trained on larger and more diverse datasets. # Uploaded model - Developed by: gnokit - License: apache-2.0 - Finetuned from model : unsloth/gemma-1.1-2b-it-bnb-4bit This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Model Details:\n\n- This model was created by finetuning the unsloth/gemma-1.1-2b-it-bnb-4bit model using the coedit dataset from Grammarly.\n- The finetuning was done following the fine-tuning notebook provided by Unsloth as a practice of finetuning using the coedit dataset.\n- The model was finetuned using the prompt format of the gemma-2b-it model.\n\n- The finetuning was done 2x faster by utilizing the Unsloth and Hugging Face's TRL library.", "# Limitations:\nThe model was finetuned on a specific dataset (coedit) and may not generalize well to all Italian text generation tasks. Its performance may be limited compared to models trained on larger and more diverse datasets.", "# Uploaded model\n\n- Developed by: gnokit\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-1.1-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #gguf #gemma #text-generation-inference #unsloth #trl #en #base_model-unsloth/gemma-1.1-2b-it-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Model Details:\n\n- This model was created by finetuning the unsloth/gemma-1.1-2b-it-bnb-4bit model using the coedit dataset from Grammarly.\n- The finetuning was done following the fine-tuning notebook provided by Unsloth as a practice of finetuning using the coedit dataset.\n- The model was finetuned using the prompt format of the gemma-2b-it model.\n\n- The finetuning was done 2x faster by utilizing the Unsloth and Hugging Face's TRL library.", "# Limitations:\nThe model was finetuned on a specific dataset (coedit) and may not generalize well to all Italian text generation tasks. Its performance may be limited compared to models trained on larger and more diverse datasets.", "# Uploaded model\n\n- Developed by: gnokit\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-1.1-2b-it-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
null
# DavidAU/Pantheon-10.7b-Q6_K-GGUF This model was converted to GGUF format from [`Gryphe/Pantheon-10.7b`](https://huggingface.co/Gryphe/Pantheon-10.7b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Gryphe/Pantheon-10.7b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Pantheon-10.7b-Q6_K-GGUF --model pantheon-10.7b.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Pantheon-10.7b-Q6_K-GGUF --model pantheon-10.7b.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pantheon-10.7b.Q6_K.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/Pantheon-10.7b-Q6_K-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "license:apache-2.0", "region:us" ]
null
2024-04-29T04:50:27+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us
# DavidAU/Pantheon-10.7b-Q6_K-GGUF This model was converted to GGUF format from 'Gryphe/Pantheon-10.7b' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Pantheon-10.7b-Q6_K-GGUF\nThis model was converted to GGUF format from 'Gryphe/Pantheon-10.7b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us \n", "# DavidAU/Pantheon-10.7b-Q6_K-GGUF\nThis model was converted to GGUF format from 'Gryphe/Pantheon-10.7b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Fast Whisper Small Ko - Youngsu Jo

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 17.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2606

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 100
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6398        | 1.1236 | 100  | 0.2606          |

### Framework versions

- PEFT 0.10.1.dev0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
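Because this repository holds a PEFT adapter on top of a Whisper base model, a short sketch of how the adapter is usually loaded for inference may be useful. The base and adapter repo ids come from the card's metadata; the audio-handling lines are illustrative assumptions and are left commented out.

```python
# Hedged sketch: attaching this PEFT adapter to its Whisper base model for Korean ASR.
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base_id = "openai/whisper-large-v2"   # base model from the card's metadata
adapter_id = "ZeroWater93/test"       # adapter repo id from the card's metadata

processor = WhisperProcessor.from_pretrained(base_id)
base_model = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative usage (assumes a 16 kHz mono waveform `audio_array` from a dataset or .wav file):
# inputs = processor(audio_array, sampling_rate=16000, return_tensors="pt")
# predicted_ids = model.generate(input_features=inputs.input_features)
# print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```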
{"language": ["ko"], "library_name": "peft", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_17_0"], "base_model": "openai/whisper-large-v2", "model-index": [{"name": "Fast Whisper Small Ko - Youngsu Jo", "results": []}]}
ZeroWater93/test
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "ko", "dataset:mozilla-foundation/common_voice_17_0", "base_model:openai/whisper-large-v2", "region:us" ]
null
2024-04-29T04:51:26+00:00
[]
[ "ko" ]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #ko #dataset-mozilla-foundation/common_voice_17_0 #base_model-openai/whisper-large-v2 #region-us
Fast Whisper Small Ko - Youngsu Jo ================================== This model is a fine-tuned version of openai/fast\_whisper-Large\_v2 on the Common Voice 17.0 dataset. It achieves the following results on the evaluation set: * Loss: 0.2606 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.001 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 50 * training\_steps: 100 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.1.dev0 * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* training\\_steps: 100\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #ko #dataset-mozilla-foundation/common_voice_17_0 #base_model-openai/whisper-large-v2 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* training\\_steps: 100\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-classification
setfit
# SetFit with meedan/paraphrase-filipino-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [bsen26/eyeR-classification-multi-label-category1](https://huggingface.co/datasets/bsen26/eyeR-classification-multi-label-category1) dataset that can be used for Text Classification. This SetFit model uses [meedan/paraphrase-filipino-mpnet-base-v2](https://huggingface.co/meedan/paraphrase-filipino-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [meedan/paraphrase-filipino-mpnet-base-v2](https://huggingface.co/meedan/paraphrase-filipino-mpnet-base-v2) - **Classification head:** a OneVsRestClassifier instance - **Maximum Sequence Length:** 128 tokens <!-- - **Number of Classes:** Unknown --> - **Training Dataset:** [bsen26/eyeR-classification-multi-label-category1](https://huggingface.co/datasets/bsen26/eyeR-classification-multi-label-category1) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.6977 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("bsen26/eyeR-category1-multilabel") # Run inference preds = model("great ??????") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 1 | 11.2634 | 39 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0018 | 1 | 0.3366 | - | | 0.0893 | 50 | 0.1341 | - | | 0.1786 | 100 | 0.1109 | - | | 0.2679 | 150 | 0.0181 | - | | 0.3571 | 200 | 0.0073 | - | | 0.4464 | 250 | 0.047 | - | | 0.5357 | 300 | 0.0031 | - | | 0.625 | 350 | 0.0023 | - | | 0.7143 | 400 | 0.0008 | - | | 0.8036 | 450 | 0.0151 | - | | 0.8929 | 500 | 0.0007 | - | | 0.9821 | 550 | 0.0014 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.0 - PyTorch: 2.2.1+cu121 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "datasets": ["bsen26/eyeR-classification-multi-label-category1"], "metrics": ["accuracy"], "base_model": "meedan/paraphrase-filipino-mpnet-base-v2", "widget": [{"text": "Fries werent filled fully"}, {"text": "I ordered 2x 2pcs chicken. I got 1 2pcs chicken and 1 chicken spaghetti. I hope you\u2019ll do something about this since 2pcs chicken is worth more than chicken spaghetti. This is unacceptable."}, {"text": "great ??????"}, {"text": "I specifically requesting, not leg part"}, {"text": "Coke Float does not look and taste as coke float. Seems like no ice cream is added or coz it already melt, and taste is more ice than coke I have waited for 48 minutes. Worst coke float ever"}], "pipeline_tag": "text-classification", "inference": false, "model-index": [{"name": "SetFit with meedan/paraphrase-filipino-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "bsen26/eyeR-classification-multi-label-category1", "type": "bsen26/eyeR-classification-multi-label-category1", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.6977428851815506, "name": "Accuracy"}]}]}]}
bsen26/eyeR-category1-multilabel
null
[ "setfit", "safetensors", "xlm-roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "dataset:bsen26/eyeR-classification-multi-label-category1", "arxiv:2209.11055", "base_model:meedan/paraphrase-filipino-mpnet-base-v2", "model-index", "region:us" ]
null
2024-04-29T04:52:14+00:00
[ "2209.11055" ]
[]
TAGS #setfit #safetensors #xlm-roberta #sentence-transformers #text-classification #generated_from_setfit_trainer #dataset-bsen26/eyeR-classification-multi-label-category1 #arxiv-2209.11055 #base_model-meedan/paraphrase-filipino-mpnet-base-v2 #model-index #region-us
SetFit with meedan/paraphrase-filipino-mpnet-base-v2 ==================================================== This is a SetFit model trained on the bsen26/eyeR-classification-multi-label-category1 dataset that can be used for Text Classification. This SetFit model uses meedan/paraphrase-filipino-mpnet-base-v2 as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a Sentence Transformer with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. Model Details ------------- ### Model Description * Model Type: SetFit * Sentence Transformer body: meedan/paraphrase-filipino-mpnet-base-v2 * Classification head: a OneVsRestClassifier instance * Maximum Sequence Length: 128 tokens * Training Dataset: bsen26/eyeR-classification-multi-label-category1 ### Model Sources * Repository: SetFit on GitHub * Paper: Efficient Few-Shot Learning Without Prompts * Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts Evaluation ---------- ### Metrics Uses ---- ### Direct Use for Inference First install the SetFit library: Then you can load this model and run inference. Training Details ---------------- ### Training Set Metrics ### Training Hyperparameters * batch\_size: (16, 16) * num\_epochs: (1, 1) * max\_steps: -1 * sampling\_strategy: oversampling * num\_iterations: 20 * body\_learning\_rate: (2e-05, 2e-05) * head\_learning\_rate: 2e-05 * loss: CosineSimilarityLoss * distance\_metric: cosine\_distance * margin: 0.25 * end\_to\_end: False * use\_amp: False * warmup\_proportion: 0.1 * seed: 42 * eval\_max\_steps: -1 * load\_best\_model\_at\_end: False ### Training Results ### Framework Versions * Python: 3.10.12 * SetFit: 1.0.3 * Sentence Transformers: 2.7.0 * Transformers: 4.40.0 * PyTorch: 2.2.1+cu121 * Datasets: 2.19.0 * Tokenizers: 0.19.1 ### BibTeX
[ "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: meedan/paraphrase-filipino-mpnet-base-v2\n* Classification head: a OneVsRestClassifier instance\n* Maximum Sequence Length: 128 tokens\n* Training Dataset: bsen26/eyeR-classification-multi-label-category1", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (1, 1)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 2e-05)\n* head\\_learning\\_rate: 2e-05\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False", "### Training Results", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.0\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1", "### BibTeX" ]
[ "TAGS\n#setfit #safetensors #xlm-roberta #sentence-transformers #text-classification #generated_from_setfit_trainer #dataset-bsen26/eyeR-classification-multi-label-category1 #arxiv-2209.11055 #base_model-meedan/paraphrase-filipino-mpnet-base-v2 #model-index #region-us \n", "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: meedan/paraphrase-filipino-mpnet-base-v2\n* Classification head: a OneVsRestClassifier instance\n* Maximum Sequence Length: 128 tokens\n* Training Dataset: bsen26/eyeR-classification-multi-label-category1", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (16, 16)\n* num\\_epochs: (1, 1)\n* max\\_steps: -1\n* sampling\\_strategy: oversampling\n* num\\_iterations: 20\n* body\\_learning\\_rate: (2e-05, 2e-05)\n* head\\_learning\\_rate: 2e-05\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: False", "### Training Results", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.0\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1", "### BibTeX" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small CN - my voice This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the my_CN_ds dataset. It achieves the following results on the evaluation set: - Loss: 0.7879 - Wer: 100.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 400 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-----:| | 0.0 | 100.0 | 100 | 0.7750 | 100.0 | | 0.0 | 200.0 | 200 | 0.7819 | 100.0 | | 0.0 | 300.0 | 300 | 0.7860 | 100.0 | | 0.0 | 400.0 | 400 | 0.7879 | 100.0 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
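For completeness, a minimal sketch of running this checkpoint through the 🤗 Transformers ASR pipeline follows; the repo id comes from the card's metadata, and the audio filename is a placeholder assumption.

```python
# Minimal sketch: transcription with the fine-tuned checkpoint via the ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Svetlana0303/whisper-small-cn",  # repo id from the card's metadata
)
print(asr("sample.wav"))  # placeholder path; any mono 16 kHz audio file works
```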
{"language": ["cn"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["Svetlana0303/my_CN_ds"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small CN - my voice", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "my_CN_ds", "type": "Svetlana0303/my_CN_ds", "args": "split: test"}, "metrics": [{"type": "wer", "value": 100.0, "name": "Wer"}]}]}]}
Svetlana0303/whisper-small-cn
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "cn", "dataset:Svetlana0303/my_CN_ds", "base_model:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:52:22+00:00
[]
[ "cn" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #cn #dataset-Svetlana0303/my_CN_ds #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
Whisper Small CN - my voice =========================== This model is a fine-tuned version of openai/whisper-small on the my\_CN\_ds dataset. It achieves the following results on the evaluation set: * Loss: 0.7879 * Wer: 100.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 50 * training\_steps: 400 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* training\\_steps: 400\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #cn #dataset-Svetlana0303/my_CN_ds #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* training\\_steps: 400\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
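The Whisper card above lists its training hyperparameters but no inference snippet; a small sketch of loading the published checkpoint `Svetlana0303/whisper-small-cn` with the `transformers` ASR pipeline is below. The audio path is a placeholder, and given the card's reported 100.0 WER the transcriptions should not be expected to be usable.

```python
# Minimal sketch; "sample.wav" is a placeholder audio file, not part of the card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Svetlana0303/whisper-small-cn",
)
result = asr("sample.wav")
print(result["text"])
```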
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
andersonbcdefg/tiny-emb-2024-04-29_04-53-53
null
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-29T04:53:53+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/hitao19
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:54:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FlanT5 XL Grammarly CoEdit: Text Editing by Task-Specific Instruction Tuning This model is a fine-tuned version of [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) on the coedit dataset. It achieves the following results on the evaluation set: - Loss: 0.5379 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6654 | 1.0 | 2158 | 0.5379 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"language": ["en"], "license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer", "text2text-generation"], "datasets": ["grammarly/coedit"], "base_model": "google/flan-t5-xl", "model-index": [{"name": "FlanT5 XL Grammarly CoEdit: Text Editing by Task-Specific Instruction Tuning", "results": []}]}
pranay-j/flan-t5-coedit-xl
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "text2text-generation", "en", "dataset:grammarly/coedit", "base_model:google/flan-t5-xl", "license:apache-2.0", "region:us" ]
null
2024-04-29T04:56:41+00:00
[]
[ "en" ]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #text2text-generation #en #dataset-grammarly/coedit #base_model-google/flan-t5-xl #license-apache-2.0 #region-us
FlanT5 XL Grammarly CoEdit: Text Editing by Task-Specific Instruction Tuning ============================================================================ This model is a fine-tuned version of google/flan-t5-xl on the coedit dataset. It achieves the following results on the evaluation set: * Loss: 0.5379 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #text2text-generation #en #dataset-grammarly/coedit #base_model-google/flan-t5-xl #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
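The CoEdit card above describes a PEFT adapter trained on top of google/flan-t5-xl, so it is not loaded with `AutoModelForSeq2SeqLM` alone; a hedged sketch of attaching the adapter with `peft` follows. The instruction-style prompt is an assumption about CoEdit's input format, not something stated in the card.

```python
# Sketch assuming the repo stores a LoRA adapter; the prompt wording is illustrative.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")
model = PeftModel.from_pretrained(base, "pranay-j/flan-t5-coedit-xl")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")

inputs = tokenizer("Fix the grammar: She no went to the market.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```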
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2-7b-dpo-full-sft-wo-live_qa This model is a fine-tuned version of [Minbyul/llama2-7b-wo-live_qa-sft](https://huggingface.co/Minbyul/llama2-7b-wo-live_qa-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.4649 - Rewards/chosen: -0.1737 - Rewards/rejected: -0.6052 - Rewards/accuracies: 0.9167 - Rewards/margins: 0.4315 - Logps/rejected: -682.3730 - Logps/chosen: -361.0932 - Logits/rejected: -0.6401 - Logits/chosen: -0.8344 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.3165 | 0.74 | 100 | 0.5069 | -0.1530 | -0.4700 | 0.875 | 0.3170 | -668.8459 | -359.0135 | -0.6288 | -0.8302 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.2
{"tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/llama2-7b-wo-live_qa-sft", "model-index": [{"name": "llama2-7b-dpo-full-sft-wo-live_qa", "results": []}]}
Minbyul/llama2-7b-dpo-full-sft-wo-live_qa
null
[ "transformers", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:Minbyul/llama2-7b-wo-live_qa-sft", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T04:56:43+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/llama2-7b-wo-live_qa-sft #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
llama2-7b-dpo-full-sft-wo-live\_qa ================================== This model is a fine-tuned version of Minbyul/llama2-7b-wo-live\_qa-sft on the HuggingFaceH4/ultrafeedback\_binarized dataset. It achieves the following results on the evaluation set: * Loss: 0.4649 * Rewards/chosen: -0.1737 * Rewards/rejected: -0.6052 * Rewards/accuracies: 0.9167 * Rewards/margins: 0.4315 * Logps/rejected: -682.3730 * Logps/chosen: -361.0932 * Logits/rejected: -0.6401 * Logits/chosen: -0.8344 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-07 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 4 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 64 * total\_eval\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.39.0.dev0 * Pytorch 2.1.2 * Datasets 2.14.6 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-Minbyul/llama2-7b-wo-live_qa-sft #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-07\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
null
null
# DavidAU/Tiamat-7b-1.1-DPO-Q8_0-GGUF This model was converted to GGUF format from [`Gryphe/Tiamat-7b-1.1-DPO`](https://huggingface.co/Gryphe/Tiamat-7b-1.1-DPO) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Gryphe/Tiamat-7b-1.1-DPO) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Tiamat-7b-1.1-DPO-Q8_0-GGUF --model tiamat-7b-1.1-dpo.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Tiamat-7b-1.1-DPO-Q8_0-GGUF --model tiamat-7b-1.1-dpo.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiamat-7b-1.1-dpo.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/Tiamat-7b-1.1-DPO-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "license:apache-2.0", "region:us" ]
null
2024-04-29T05:00:10+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us
# DavidAU/Tiamat-7b-1.1-DPO-Q8_0-GGUF This model was converted to GGUF format from 'Gryphe/Tiamat-7b-1.1-DPO' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Tiamat-7b-1.1-DPO-Q8_0-GGUF\nThis model was converted to GGUF format from 'Gryphe/Tiamat-7b-1.1-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us \n", "# DavidAU/Tiamat-7b-1.1-DPO-Q8_0-GGUF\nThis model was converted to GGUF format from 'Gryphe/Tiamat-7b-1.1-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
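The Tiamat GGUF card above shows the llama.cpp CLI; a hedged Python equivalent using `llama-cpp-python` is sketched below. It assumes the file `tiamat-7b-1.1-dpo.Q8_0.gguf` named in the card has already been downloaded to the working directory.

```python
# Sketch only: assumes the GGUF file from the card sits in the working directory.
from llama_cpp import Llama

llm = Llama(model_path="tiamat-7b-1.1-dpo.Q8_0.gguf", n_ctx=2048)
output = llm("The meaning to life and the universe is", max_tokens=128)
print(output["choices"][0]["text"])
```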
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "huggyllama/llama-7b"}
shrenikb/hftestepoch2id2
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:huggyllama/llama-7b", "region:us" ]
null
2024-04-29T05:01:41+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-huggyllama/llama-7b #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-huggyllama/llama-7b #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
transformers
# Uploaded model - **Developed by:** ahmedsamirio - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
ahmedsamirio/llama-3-8b-instruct-alpaca-ar
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-29T05:01:44+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: ahmedsamirio - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: ahmedsamirio\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: ahmedsamirio\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
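The Unsloth card above names the 4-bit base model and training speedup but gives no loading code; a sketch of reloading the checkpoint with Unsloth's `FastLanguageModel` follows. Whether `ahmedsamirio/llama-3-8b-instruct-alpaca-ar` holds merged weights or only LoRA adapters is an assumption this sketch does not resolve, and the sequence length is a guess.

```python
# Hedged sketch mirroring the 4-bit base named in the card; max_seq_length is assumed.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ahmedsamirio/llama-3-8b-instruct-alpaca-ar",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized mode
```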
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-poison-20p This model is a fine-tuned version of [Undi95/Meta-Llama-3-8B-hf](https://huggingface.co/Undi95/Meta-Llama-3-8B-hf) on the HuggingFaceH4/ultrachat_200k dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0 | 1.0 | 1350 | nan | ### Framework versions - PEFT 0.7.1 - Transformers 4.39.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "other", "library_name": "peft", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrachat_200k"], "base_model": "Undi95/Meta-Llama-3-8B-hf", "model-index": [{"name": "llama3-poison-20p", "results": []}]}
terry69/llama3-poison-20p
null
[ "peft", "tensorboard", "safetensors", "llama", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:HuggingFaceH4/ultrachat_200k", "base_model:Undi95/Meta-Llama-3-8B-hf", "license:other", "region:us" ]
null
2024-04-29T05:03:23+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #llama #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-Undi95/Meta-Llama-3-8B-hf #license-other #region-us
llama3-poison-20p ================= This model is a fine-tuned version of Undi95/Meta-Llama-3-8B-hf on the HuggingFaceH4/ultrachat\_200k dataset. It achieves the following results on the evaluation set: * Loss: nan Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 4 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 16 * total\_eval\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.7.1 * Transformers 4.39.0.dev0 * Pytorch 2.2.2+cu121 * Datasets 2.14.6 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #llama #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/ultrachat_200k #base_model-Undi95/Meta-Llama-3-8B-hf #license-other #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
null
null
Imatrix compressions of FP Merge of "D_AU-Orac-13B-Tiefighter-slerp". "Imatrix Plus" is an upgraded form of Imatrix which uses full precision for specific parts of the compression. As a result, all compressions will be slightly larger in size than standard 13B compressions. This method results in a higher quality model, especially at lower compressions. This method is applied across all compressions from IQ1 to Q8. Even IQ1_S - the most compressed version - works well; however, IQ4/Q4 are suggested as minimums for quality. Highest quality will be Q6/Q8. How big a difference is this merge? Original Tiefighter IQ1_S (with imatrix enhancements) tested at a perplexity of: PPL = 17.2589 +/- 0.12466* Tiefighter Orca 2 IQ1_S (with imatrix enhancements) tested at a perplexity of: PPL = 12.6985 +/- 0.09106* Note that LOWER perplexity is better. * Tested using llamacpp, perplexity.exe with wiki.raw. In addition, the Imatrix file used to "fix" the compressed files post compression resulted in over 2 whole points lower perplexity at IQ1_S vs some of the other "Imatrix" files currently in use. Original Tiefighter IQ1_S (with imatrix enhancements) tested with a different "Imatrix" repair file at a perplexity of: PPL = 19.6355 +/- 0.14435 Likewise, the merge itself affected perplexity too. This merge was an experiment to test the already established Roleplay, Fiction and Story generation of "Tiefighter" with some of "Orca 2"'s qualities. Additional merge experiments are in progress. For Imatrix Plus, this was a test of high precision in specific areas of the model, leading to a slightly larger compressed file. In addition, the Imatrix process itself used a larger "calibration" file than standard to further enhance quality. The process added approximately 310 MB to each compressed file. A blank or standard Alpaca Template for text generation will work. Currently "CHATML" is untested. Context length: 4096. Please see the original model card for specific details of use, additional credits and tips: [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) * [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: KoboldAI/LLaMA2-13B-Tiefighter layer_range: [0, 40] - model: microsoft/Orca-2-13b layer_range: [0, 40] merge_method: slerp base_model: microsoft/Orca-2-13b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
{"language": ["en"], "license": "mit"}
DavidAU/D_AU-Orac-13B-Tiefighter-slerp-imat-plus-GGUF
null
[ "gguf", "en", "license:mit", "region:us" ]
null
2024-04-29T05:03:30+00:00
[]
[ "en" ]
TAGS #gguf #en #license-mit #region-us
Imatrix compressions of FP Merge of "D_AU-Orac-13B-Tiefighter-slerp". "Imatrix Plus" is an upgraded form of Imatrix which uses full precision for specific parts of the compression. As a result, all compressions will be slightly larger in size than standard 13B compressions. This method results in a higher quality model, especially at lower compressions. This method is applied across all compressions from IQ1 to Q8. Even IQ1_S - the most compressed version - works well; however, IQ4/Q4 are suggested as minimums for quality. Highest quality will be Q6/Q8. How big a difference is this merge? Original Tiefighter IQ1_S (with imatrix enhancements) tested at a perplexity of: PPL = 17.2589 +/- 0.12466* Tiefighter Orca 2 IQ1_S (with imatrix enhancements) tested at a perplexity of: PPL = 12.6985 +/- 0.09106* Note that LOWER perplexity is better. * Tested using llamacpp, URL with URL. In addition, the Imatrix file used to "fix" the compressed files post compression resulted in over 2 whole points lower perplexity at IQ1_S vs some of the other "Imatrix" files currently in use. Original Tiefighter IQ1_S (with imatrix enhancements) tested with a different "Imatrix" repair file at a perplexity of: PPL = 19.6355 +/- 0.14435 Likewise, the merge itself affected perplexity too. This merge was an experiment to test the already established Roleplay, Fiction and Story generation of "Tiefighter" with some of "Orca 2"'s qualities. Additional merge experiments are in progress. For Imatrix Plus, this was a test of high precision in specific areas of the model, leading to a slightly larger compressed file. In addition, the Imatrix process itself used a larger "calibration" file than standard to further enhance quality. The process added approximately 310 MB to each compressed file. A blank or standard Alpaca Template for text generation will work. Currently "CHATML" is untested. Context length: 4096. Please see the original model card for specific details of use, additional credits and tips: KoboldAI/LLaMA2-13B-Tiefighter # merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * microsoft/Orca-2-13b * KoboldAI/LLaMA2-13B-Tiefighter ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* microsoft/Orca-2-13b\n* KoboldAI/LLaMA2-13B-Tiefighter", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#gguf #en #license-mit #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* microsoft/Orca-2-13b\n* KoboldAI/LLaMA2-13B-Tiefighter", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-generation
transformers
# VILA Model Card ## Model details **Model type:** VILA is a visual language model (VLM) pretrained with interleaved image-text data at scale, enabling multi-image VLM. VILA is deployable on the edge, including Jetson Orin and laptops, via AWQ 4-bit quantization through the TinyChat framework. We find: (1) image-text pairs are not enough, interleaved image-text is essential; (2) unfreezing the LLM during interleaved image-text pre-training enables in-context learning; (3) re-blending text-only instruction data is crucial to boost both VLM and text-only performance. VILA unveils appealing capabilities, including: multi-image reasoning, in-context learning, visual chain-of-thought, and better world knowledge. **Model date:** VILA1.5-13b was trained in May 2024. **Paper or resources for more information:** https://github.com/Efficient-Large-Model/VILA ``` @misc{lin2023vila, title={VILA: On Pre-training for Visual Language Models}, author={Ji Lin and Hongxu Yin and Wei Ping and Yao Lu and Pavlo Molchanov and Andrew Tao and Huizi Mao and Jan Kautz and Mohammad Shoeybi and Song Han}, year={2023}, eprint={2312.07533}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## License - The code is released under the Apache 2.0 license as found in the [LICENSE](./LICENSE) file. - The pretrained weights are released under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en). - The service is a research preview intended for non-commercial use only, and is subject to the following licenses and terms: - [Model License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA - [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI - [Dataset Licenses](https://github.com/Efficient-Large-Model/VILA/blob/main/data_prepare/LICENSE) for each one used during training. **Where to send questions or comments about the model:** https://github.com/Efficient-Large-Model/VILA/issues ## Intended use **Primary intended uses:** The primary use of VILA is research on large multimodal models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training dataset See [Dataset Preparation](https://github.com/Efficient-Large-Model/VILA/blob/main/data_prepare/README.md) for more details. ## Evaluation dataset A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
{"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["VILA", "VLM"], "pipeline_tag": "text-generation"}
Efficient-Large-Model/VILA1.5-13b
null
[ "transformers", "safetensors", "llava_llama", "VILA", "VLM", "text-generation", "arxiv:2312.07533", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-29T05:03:57+00:00
[ "2312.07533" ]
[]
TAGS #transformers #safetensors #llava_llama #VILA #VLM #text-generation #arxiv-2312.07533 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
# VILA Model Card ## Model details Model type: VILA is a visual language model (VLM) pretrained with interleaved image-text data at scale, enabling multi-image VLM. VILA is deployable on the edge, including Jetson Orin and laptops, via AWQ 4-bit quantization through the TinyChat framework. We find: (1) image-text pairs are not enough, interleaved image-text is essential; (2) unfreezing the LLM during interleaved image-text pre-training enables in-context learning; (3) re-blending text-only instruction data is crucial to boost both VLM and text-only performance. VILA unveils appealing capabilities, including: multi-image reasoning, in-context learning, visual chain-of-thought, and better world knowledge. Model date: VILA1.5-13b was trained in May 2024. Paper or resources for more information: URL ## License - The code is released under the Apache 2.0 license as found in the LICENSE file. - The pretrained weights are released under the CC-BY-NC-SA-4.0 license. - The service is a research preview intended for non-commercial use only, and is subject to the following licenses and terms: - Model License of LLaMA - Terms of Use of the data generated by OpenAI - Dataset Licenses for each one used during training. Where to send questions or comments about the model: URL ## Intended use Primary intended uses: The primary use of VILA is research on large multimodal models and chatbots. Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training dataset See Dataset Preparation for more details. ## Evaluation dataset A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
[ "# VILA Model Card", "## Model details\n\nModel type:\nVILA is a visual language model (VLM) pretrained with interleaved image-text data at scale, enabling multi-image VLM. VILA is deployable on the edge, including Jetson Orin and laptop by AWQ 4bit quantization through TinyChat framework. We find: (1) image-text pairs are not enough, interleaved image-text is essential; (2) unfreezing LLM during interleaved image-text pre-training enables in-context learning; (3)re-blending text-only instruction data is crucial to boost both VLM and text-only performance. VILA unveils appealing capabilities, including: multi-image reasoning, in-context learning, visual chain-of-thought, and better world knowledge.\n\nModel date:\nVILA1.5-13b was trained in May 2024.\n\nPaper or resources for more information:\nURL", "## License\n- The code is released under the Apache 2.0 license as found in the LICENSE file.\n- The pretrained weights are released under the CC-BY-NC-SA-4.0 license.\n- The service is a research preview intended for non-commercial use only, and is subject to the following licenses and terms:\n - Model License of LLaMA\n - Terms of Use of the data generated by OpenAI\n - Dataset Licenses for each one used during training.\n\nWhere to send questions or comments about the model:\nURL", "## Intended use\nPrimary intended uses:\nThe primary use of VILA is research on large multimodal models and chatbots.\n\nPrimary intended users:\nThe primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.", "## Training dataset\nSee Dataset Preparation for more details.", "## Evaluation dataset\nA collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs." ]
[ "TAGS\n#transformers #safetensors #llava_llama #VILA #VLM #text-generation #arxiv-2312.07533 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n", "# VILA Model Card", "## Model details\n\nModel type:\nVILA is a visual language model (VLM) pretrained with interleaved image-text data at scale, enabling multi-image VLM. VILA is deployable on the edge, including Jetson Orin and laptop by AWQ 4bit quantization through TinyChat framework. We find: (1) image-text pairs are not enough, interleaved image-text is essential; (2) unfreezing LLM during interleaved image-text pre-training enables in-context learning; (3)re-blending text-only instruction data is crucial to boost both VLM and text-only performance. VILA unveils appealing capabilities, including: multi-image reasoning, in-context learning, visual chain-of-thought, and better world knowledge.\n\nModel date:\nVILA1.5-13b was trained in May 2024.\n\nPaper or resources for more information:\nURL", "## License\n- The code is released under the Apache 2.0 license as found in the LICENSE file.\n- The pretrained weights are released under the CC-BY-NC-SA-4.0 license.\n- The service is a research preview intended for non-commercial use only, and is subject to the following licenses and terms:\n - Model License of LLaMA\n - Terms of Use of the data generated by OpenAI\n - Dataset Licenses for each one used during training.\n\nWhere to send questions or comments about the model:\nURL", "## Intended use\nPrimary intended uses:\nThe primary use of VILA is research on large multimodal models and chatbots.\n\nPrimary intended users:\nThe primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.", "## Training dataset\nSee Dataset Preparation for more details.", "## Evaluation dataset\nA collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs." ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - yuffish/mug-segmented This is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use A minimal example of loading this DreamBooth checkpoint with diffusers (device, dtype and sampler settings are only illustrative): ```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "yuffish/mug-segmented", torch_dtype=torch.float16
).to("cuda")

# "a photo of sks object" is the instance prompt this model was trained on
image = pipe("a photo of sks object", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_object.png")
``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "inference": true, "base_model": "stabilityai/stable-diffusion-2-1-base", "instance_prompt": "a photo of sks object"}
yuffish/mug-segmented
null
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-29T05:05:45+00:00
[]
[]
TAGS #diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# DreamBooth - yuffish/mug-segmented This is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using DreamBooth. You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# DreamBooth - yuffish/mug-segmented\n\nThis is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1-base #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# DreamBooth - yuffish/mug-segmented\n\nThis is a dreambooth model derived from stabilityai/stable-diffusion-2-1-base. The weights were trained on a photo of sks object using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
null
null
# DavidAU/MythoMist-7b-Q8_0-GGUF This model was converted to GGUF format from [`Gryphe/MythoMist-7b`](https://huggingface.co/Gryphe/MythoMist-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Gryphe/MythoMist-7b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/MythoMist-7b-Q8_0-GGUF --model mythomist-7b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/MythoMist-7b-Q8_0-GGUF --model mythomist-7b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mythomist-7b.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "other", "tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/MythoMist-7b-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "license:other", "region:us" ]
null
2024-04-29T05:06:18+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #license-other #region-us
# DavidAU/MythoMist-7b-Q8_0-GGUF This model was converted to GGUF format from 'Gryphe/MythoMist-7b' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/MythoMist-7b-Q8_0-GGUF\nThis model was converted to GGUF format from 'Gryphe/MythoMist-7b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-other #region-us \n", "# DavidAU/MythoMist-7b-Q8_0-GGUF\nThis model was converted to GGUF format from 'Gryphe/MythoMist-7b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/9s1uob7
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T05:07:43+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOOwO/final31
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T05:09:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/difcblf
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T05:11:56+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
abc88767/model6
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T05:13:45+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma-Chimdi-LORA-TUNED This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 2.1484 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training Hardware This model was trained using Intel(R) Data Center GPU Max 1100 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - training_steps: 593 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.8528 | 0.8197 | 100 | 2.5372 | | 2.4491 | 1.6393 | 200 | 2.3103 | | 2.2851 | 2.4590 | 300 | 2.2148 | | 2.2162 | 3.2787 | 400 | 2.1720 | | 2.1935 | 4.0984 | 500 | 2.1484 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.0.post0+cxx11.abi - Datasets 2.19.0 - Tokenizers 0.19.1
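The card above documents a TRL SFT run that produced a LoRA adapter for google/gemma-2b. As a minimal, hedged sketch (the adapter repo id chchimdi/gemma-Chimdi-LORA-TUNED is taken from this record, the prompt is invented, and the gemma license must already be accepted), loading the adapter for inference with PEFT might look like this:

```python
# Hedged sketch: attach the LoRA adapter to the gemma-2b base model.
# Repo ids and prompt are illustrative assumptions, not confirmed by the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2b"                      # base model named in the card
adapter_id = "chchimdi/gemma-Chimdi-LORA-TUNED"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # load LoRA weights on top
model.eval()

inputs = tokenizer("Write a short greeting.", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For deployment, merging the adapter into the base weights with `model.merge_and_unload()` is a common alternative, assuming the adapter was trained as a standard LoRA.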
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer", "ipex", "GPU Max 1100"], "datasets": ["generator"], "base_model": "google/gemma-2b", "model-index": [{"name": "gemma-Chimdi-LORA-TUNED", "results": []}]}
chchimdi/gemma-Chimdi-LORA-TUNED
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "ipex", "GPU Max 1100", "dataset:generator", "base_model:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-04-29T05:16:54+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #ipex #GPU Max 1100 #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us
gemma-Chimdi-LORA-TUNED ======================= This model is a fine-tuned version of google/gemma-2b on the generator dataset. It achieves the following results on the evaluation set: * Loss: 2.1484 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training Hardware ----------------- This model was trained using Intel(R) Data Center GPU Max 1100 Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 2 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.05 * training\_steps: 593 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.1 * Pytorch 2.1.0.post0+URL * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 593", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0.post0+URL\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #ipex #GPU Max 1100 #dataset-generator #base_model-google/gemma-2b #license-gemma #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* training\\_steps: 593", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.1.0.post0+URL\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
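Because the quick-start section of this card is left as a placeholder, the following is only a hedged, generic sketch of loading the checkpoint as a causal language model; the repo id shallow6414/0mdrv42 comes from this record, and the prompt and generation settings are invented:

```python
# Hedged sketch only: the card gives no usage details, so everything here
# (repo id, prompt, generation length) is an illustrative assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "shallow6414/0mdrv42"  # assumed model repo id from this record
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```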
{"library_name": "transformers", "tags": []}
shallow6414/0mdrv42
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T05:19:49+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rinko_300_labeling This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0068 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.3912 | 0.9897 | 48 | 2.2464 | | 2.2442 | 2.0 | 97 | 2.1167 | | 2.1047 | 2.9897 | 145 | 2.0317 | | 2.05 | 4.0 | 194 | 2.0067 | | 2.0626 | 4.9485 | 240 | 2.0068 | ### Framework versions - PEFT 0.7.1 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
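As with the other PEFT cards in this dump, a minimal sketch of attaching this adapter to its base model might look like the following; the adapter repo id ikno/rinko_300_labeling is taken from this record, the chat message is invented, and Meta-Llama-3-8B-Instruct is gated, so access must already be granted:

```python
# Hedged sketch: attach the rinko_300_labeling adapter to Llama-3-8B-Instruct.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # base model named in the card
adapter_id = "ikno/rinko_300_labeling"           # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_id), adapter_id
)

# Build a chat-formatted prompt with the base model's chat template.
messages = [{"role": "user", "content": "Label this sentence: the food was great."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```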
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "rinko_300_labeling", "results": []}]}
ikno/rinko_300_labeling
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-04-29T05:20:46+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us
rinko\_300\_labeling ==================== This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.0068 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-06 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 5 ### Training results ### Framework versions * PEFT 0.7.1 * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small CN - my voice with CER metric This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the my_CN_ds dataset. It achieves the following results on the evaluation set: - Loss: 0.7618 - Cer: 44.3038 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 400 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0001 | 100.0 | 100 | 0.7589 | 35.4430 | | 0.0 | 200.0 | 200 | 0.7597 | 44.3038 | | 0.0 | 300.0 | 300 | 0.7608 | 44.3038 | | 0.0 | 400.0 | 400 | 0.7618 | 44.3038 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
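A minimal sketch of running this fine-tuned Whisper checkpoint for transcription with the transformers pipeline might look like this; the repo id Svetlana0303/whisper-small-cn_1 is taken from this record, and the audio file path is an assumption:

```python
# Hedged sketch: transcribe a local audio file with the fine-tuned Whisper model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Svetlana0303/whisper-small-cn_1",  # assumed repo id from this record
)
result = asr("example_recording.wav")  # path to a local audio file (assumption)
print(result["text"])
```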
{"language": ["cn"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["Svetlana0303/my_CN_ds_CER"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small CN - my voice with CER metric", "results": []}]}
Svetlana0303/whisper-small-cn_1
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "cn", "dataset:Svetlana0303/my_CN_ds_CER", "base_model:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-29T05:21:45+00:00
[]
[ "cn" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #cn #dataset-Svetlana0303/my_CN_ds_CER #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us
Whisper Small CN - my voice with CER metric =========================================== This model is a fine-tuned version of openai/whisper-small on the my\_CN\_ds dataset. It achieves the following results on the evaluation set: * Loss: 0.7618 * Cer: 44.3038 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 50 * training\_steps: 400 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* training\\_steps: 400\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #cn #dataset-Svetlana0303/my_CN_ds_CER #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* training\\_steps: 400\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.01_3iters_bs256_nodpo_full6w_iter_2 This model is a fine-tuned version of [ShenaoZhang/0.01_3iters_bs256_nodpo_full6w_iter_1](https://huggingface.co/ShenaoZhang/0.01_3iters_bs256_nodpo_full6w_iter_1) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
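For the checkpoint described above (total_train_batch_size 256 follows from 8 per-device batch × 8 devices × 4 accumulation steps), a minimal, hedged generation sketch might be the following; the repo id is taken from this record and the chat content is invented:

```python
# Hedged sketch: chat-style generation with the iter_2 checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ShenaoZhang/0.01_3iters_bs256_nodpo_full6w_iter_2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

messages = [{"role": "user", "content": "Summarize what DPO training does in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```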
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.01_3iters_bs256_nodpo_full6w_iter_1", "model-index": [{"name": "0.01_3iters_bs256_nodpo_full6w_iter_2", "results": []}]}
ShenaoZhang/0.01_3iters_bs256_nodpo_full6w_iter_2
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.01_3iters_bs256_nodpo_full6w_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T05:21:50+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.01_3iters_bs256_nodpo_full6w_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.01_3iters_bs256_nodpo_full6w_iter_2 This model is a fine-tuned version of ShenaoZhang/0.01_3iters_bs256_nodpo_full6w_iter_1 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.01_3iters_bs256_nodpo_full6w_iter_2\n\nThis model is a fine-tuned version of ShenaoZhang/0.01_3iters_bs256_nodpo_full6w_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ShenaoZhang/0.01_3iters_bs256_nodpo_full6w_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.01_3iters_bs256_nodpo_full6w_iter_2\n\nThis model is a fine-tuned version of ShenaoZhang/0.01_3iters_bs256_nodpo_full6w_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]