Dataset columns:

| Column | Dtype | Range |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
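The column listing above can be read back with the 🤗 `datasets` library. A minimal sketch, assuming a placeholder repository id (the dataset's actual name is not given here):

```python
# Hedged sketch: load a dataset with the schema above and filter by pipeline tag.
# "your-org/model-cards-dump" is a hypothetical repo id; substitute the real one.
from datasets import load_dataset

ds = load_dataset("your-org/model-cards-dump", split="train")

# Columns described above: pipeline_tag, library_name, text, metadata, id, ...
print(ds.column_names)

# Keep only rows whose model card declares a text-generation pipeline tag.
text_gen = ds.filter(lambda row: row["pipeline_tag"] == "text-generation")
print(len(text_gen), "text-generation cards")
```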
text-generation
transformers
# Umi-7B-Mistral-HP-0 Umi-7B-Mistral-HP-0 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) * [ajibawa-2023/WikiHow-Mistral-Instruct-7B](https://huggingface.co/ajibawa-2023/WikiHow-Mistral-Instruct-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: NousResearch/Hermes-2-Pro-Mistral-7B layer_range: [0, 32] - model: ajibawa-2023/WikiHow-Mistral-Instruct-7B layer_range: [0, 32] merge_method: slerp base_model: NousResearch/Hermes-2-Pro-Mistral-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
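A minimal usage sketch (not part of the original card): loading the merged checkpoint with 🤗 Transformers; the prompt and generation settings are illustrative only.

```python
# Hedged example: load the SLERP-merged model produced by the mergekit config above.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Bandid/Umi-7B-Mistral-HP-0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Explain in one sentence what a SLERP merge does.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```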
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "NousResearch/Hermes-2-Pro-Mistral-7B", "ajibawa-2023/WikiHow-Mistral-Instruct-7B"]}
Bandid/Umi-7B-Mistral-HP-0
null
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "NousResearch/Hermes-2-Pro-Mistral-7B", "ajibawa-2023/WikiHow-Mistral-Instruct-7B", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T18:30:55+00:00
null
null
# synCAI-144k-llama3 ## Overview synCAI-144k-llama3 is a large language model trained on the synCAI144kda dataset and designed to advance AI and consciousness studies. The model leverages 144,000 synthetic rows to build a comprehensive understanding of consciousness topics, making it suitable for various AI applications in consciousness research and exploration. ## Training Dataset The synCAI144kda dataset used to train this model contains: - **10,000 Unique Rows**: Diverse questions and responses related to consciousness studies, including philosophical, neuroscientific, and quantum aspects. - **144,000 Synthetic Rows**: Additional data from Mostly AI, providing a total of 3,024,000 individual datapoints to build a robust dataset for model training. ## Intended Use This model is intended for AI applications in consciousness studies and large-scale AI tasks. Potential use cases include: - Answering questions about consciousness, including philosophical and scientific aspects. - Assisting in AI-based consciousness research and analysis. - Exploring AI's role in advancing consciousness studies and related fields. ## Model Capabilities synCAI-144k-llama3 can: - Provide detailed responses to questions about consciousness studies. - Assist in generating datasets for AI development. - Support AI-based analysis and research in consciousness-related topics. ## Licensing and Usage Ensure compliance with any licensing agreements or usage restrictions when using this model. It is intended for academic and research purposes. If you use or share the model, provide appropriate attribution. ### Contributing Contributions to the model are welcome. If you have suggestions for improvements or additional use cases, consider submitting them for review and inclusion. ## Contact Information For further information about the model or additional questions, please contact [@innerinetco](https://x.com/innerinetco)
{"license": "llama3"}
InnerI/synCAI-144k-llama3
null
[ "license:llama3", "region:us" ]
null
2024-04-28T18:31:37+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b_readme_summarization This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6496 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 1.7397 | 0.9998 | 2915 | 1.7288 | | 1.3617 | 2.0 | 5831 | 1.5983 | | 0.8781 | 2.9998 | 8746 | 1.5681 | | 0.6176 | 3.9993 | 11660 | 1.6496 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
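Since the card itself gives no usage snippet, a hedged sketch follows of how a PEFT adapter fine-tuned from Meta-Llama-3-8B is typically loaded; it assumes the repository holds adapter weights, that you have access to the gated base model, and the prompt is illustrative (the exact training format is not documented).

```python
# Hedged sketch: load the LoRA/PEFT adapter on top of its Meta-Llama-3-8B base.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
import torch

adapter_id = "bunbohue/llama3-8b_readme_summarization_instruct"
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

prompt = "Summarize the following README:\n..."  # illustrative; training prompt format not documented
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```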
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "llama3-8b_readme_summarization", "results": []}]}
bunbohue/llama3-8b_readme_summarization_instruct
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "license:other", "region:us" ]
null
2024-04-28T18:31:43+00:00
null
null
{"license": "openrail"}
Danikdsa/Miyeon
null
[ "license:openrail", "region:us" ]
null
2024-04-28T18:31:46+00:00
null
null
{}
harutennnn/9
null
[ "region:us" ]
null
2024-04-28T18:31:59+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SwinV2-GPT2_Mimic This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1623 - Rouge1: 30.2654 - Rouge2: 21.4825 - Rougel: 30.2471 - Rougelsum: 30.3167 - Gen Len: 9.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 125 | 0.1705 | 30.2654 | 21.4825 | 30.2471 | 30.3167 | 9.0 | | No log | 2.0 | 250 | 0.1623 | 30.2654 | 21.4825 | 30.2471 | 30.3167 | 9.0 | ### Framework versions - Transformers 4.37.1 - Pytorch 1.13.1+cu117 - Datasets 2.15.0 - Tokenizers 0.15.1
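A hedged usage sketch for this vision-encoder-decoder checkpoint, assuming the repository bundles an image processor and tokenizer; the input image path is a placeholder.

```python
# Hedged sketch: generate a caption/report with the SwinV2-GPT2 VisionEncoderDecoder model.
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer
from PIL import Image

repo_id = "ChayanM/SwinV2-GPT2_Mimic"
model = VisionEncoderDecoderModel.from_pretrained(repo_id)
image_processor = AutoImageProcessor.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

image = Image.open("example_image.png").convert("RGB")  # placeholder image path
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_length=32)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```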
{"tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "SwinV2-GPT2_Mimic", "results": []}]}
ChayanM/SwinV2-GPT2_Mimic
null
[ "transformers", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-04-28T18:32:17+00:00
null
null
{"license": "openrail"}
janikovakov/Brian_Johnson_acdc
null
[ "license:openrail", "region:us" ]
null
2024-04-28T18:32:50+00:00
null
null
{}
suggisingh/xlsr_hindi_withLM_4gram
null
[ "region:us" ]
null
2024-04-28T18:34:40+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/id2j1fw
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T18:35:52+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/cjs4ccl
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T18:36:46+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/racbfx7
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T18:37:51+00:00
null
transformers
# hus960/jaskier-7b-dpo-v6.1-Q4_K_M-GGUF This model was converted to GGUF format from [`bardsai/jaskier-7b-dpo-v6.1`](https://huggingface.co/bardsai/jaskier-7b-dpo-v6.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/bardsai/jaskier-7b-dpo-v6.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew: ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo hus960/jaskier-7b-dpo-v6.1-Q4_K_M-GGUF --model jaskier-7b-dpo-v6.1.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo hus960/jaskier-7b-dpo-v6.1-Q4_K_M-GGUF --model jaskier-7b-dpo-v6.1.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m jaskier-7b-dpo-v6.1.Q4_K_M.gguf -n 128 ```
{"language": ["en"], "license": "cc-by-4.0", "library_name": "transformers", "tags": ["llm", "7b", "llama-cpp", "gguf-my-repo"], "datasets": ["jondurbin/truthy-dpo-v0.1"]}
hus960/jaskier-7b-dpo-v6.1-Q4_K_M-GGUF
null
[ "transformers", "gguf", "llm", "7b", "llama-cpp", "gguf-my-repo", "en", "dataset:jondurbin/truthy-dpo-v0.1", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-28T18:37:53+00:00
null
null
{}
pk3388/resnet-50-finetuned-eurosat
null
[ "region:us" ]
null
2024-04-28T18:40:28+00:00
text-generation
transformers
# saiga-phi-3-mini-4k saiga-phi-3-mini-4k is an SFT fine-tuned version of microsoft/Phi-3-mini-4k-instruct, trained on the IlyaGusev/ru_turbo_saiga dataset. This model was made with [Phinetune]() ## Process - Learning Rate: 1.41e-05 - Maximum Sequence Length: 2048 - Dataset: IlyaGusev/ru_turbo_saiga - Split: train ## 💻 Usage ```python !pip install -qU transformers from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline model_id = "Slavator096/saiga-phi-3-mini-4k" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True) generator = pipeline("text-generation", model=model, tokenizer=tokenizer) # Example prompt prompt = "Your example prompt here" # Generate a response outputs = generator(prompt, max_length=50, num_return_sequences=1) print(outputs[0]["generated_text"]) ```
{"license": "cc-by-4.0"}
Slavator096/saiga-phi-3-mini-4k
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T18:42:12+00:00
null
null
{}
slavikRemouse/b4
null
[ "region:us" ]
null
2024-04-28T18:42:21+00:00
text-classification
transformers
{}
HanzhiZhang/CSCE5218_50percent
null
[ "transformers", "safetensors", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T18:43:49+00:00
null
null
{}
squaadinc/1714329832740x854082375539753000
null
[ "region:us" ]
null
2024-04-28T18:43:50+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2221 - Accuracy: 0.9245 - F1: 0.9244 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8238 | 1.0 | 250 | 0.3150 | 0.9055 | 0.9043 | | 0.2515 | 2.0 | 500 | 0.2221 | 0.9245 | 0.9244 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
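A minimal usage sketch (not part of the original card) for the fine-tuned classifier via the `pipeline` API; the example sentence is illustrative.

```python
# Hedged sketch: run the emotion classifier end to end with the high-level pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ujjwalgarg/distilbert-base-uncased-finetuned-emotion",
)

# Returns a list like [{"label": ..., "score": ...}] with one of the emotion labels.
print(classifier("I can't believe how well this worked, I'm thrilled!"))
```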
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9245, "name": "Accuracy"}, {"type": "f1", "value": 0.9243962596352164, "name": "F1"}]}]}]}
ujjwalgarg/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T18:44:40+00:00
null
null
{}
OneApiHarmon/edwadawda
null
[ "region:us" ]
null
2024-04-28T18:48:05+00:00
null
null
{}
tchen175/Food_classifier
null
[ "region:us" ]
null
2024-04-28T18:48:36+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Odeusys/mistral_emails
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-28T18:48:52+00:00
null
null
{}
PolyChirag/new-open-whisper
null
[ "region:us" ]
null
2024-04-28T18:49:00+00:00
null
null
{"license": "apache-2.0"}
Anwarkh1/pth
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-28T18:49:37+00:00
automatic-speech-recognition
transformers
{"license": "apache-2.0"}
PolyChirag/new-open-whisper-1
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-28T18:51:18+00:00
token-classification
transformers
{}
AliSaadatV/esm2_t12_35M_UR50D-finetuned-MOTIF_earlystop_70_15_15
null
[ "transformers", "tensorboard", "safetensors", "esm", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T18:51:40+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/2bqwk13
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T18:51:53+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Meta-Llama-3-8B-Instruct_fictional_arc_French_v1 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 36 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
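A hedged usage sketch: chatting with the fine-tuned instruct model through the Llama-3 chat template (assumed to be carried by the repo's tokenizer); the example message is illustrative.

```python
# Hedged sketch: generate a reply with the fine-tuned Llama-3-8B-Instruct checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo_id = "yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_French_v1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Pose-moi une question de logique."}]  # illustrative
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```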
{"license": "other", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Meta-Llama-3-8B-Instruct_fictional_arc_French_v1", "results": []}]}
yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_French_v1
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T18:52:49+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
anushkat/NLPM3_DistilGP2-Beatles-Lyrics-finetuned
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-28T18:53:35+00:00
null
transformers
# Uploaded model - **Developed by:** Llamarider222 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
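A hedged loading sketch with plain Transformers in 4-bit; it assumes the repository contains merged weights (if it only holds LoRA adapters, load them with PEFT on top of unsloth/llama-3-8b-bnb-4bit instead).

```python
# Hedged sketch: load the Unsloth-finetuned Llama-3 checkpoint in 4-bit with bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

repo_id = "Llamarider222/llama-3-8b"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, quantization_config=bnb_config, device_map="auto"
)

inputs = tokenizer("Llama 3 was fine-tuned to", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```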
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
Llamarider222/llama-3-8b
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-28T18:54:53+00:00
reinforcement-learning
stable-baselines3
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ulasfiliz954 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ulasfiliz954 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ulasfiliz954 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
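Outside the RL Zoo scripts above, the checkpoint can also be loaded directly with stable-baselines3; a sketch follows, assuming the usual RL Zoo checkpoint filename.

```python
# Hedged sketch: load the DQN agent from the Hub and roll it out in the Atari env.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

checkpoint = load_from_hub(
    repo_id="ulasfiliz954/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo naming convention
)
model = DQN.load(checkpoint)

# Recreate the training-time preprocessing: Atari wrappers plus 4-frame stacking.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```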
{"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "657.50 +/- 207.83", "name": "mean_reward", "verified": false}]}]}]}
ulasfiliz954/dqn-SpaceInvadersNoFrameskip-v4
null
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-28T18:59:32+00:00
null
null
{}
macseem/clothes_classification
null
[ "region:us" ]
null
2024-04-28T19:00:15+00:00
text-generation
transformers
# Hypothesentrainer: LLM for Feedback on Student Hypotheses German-language LLM fine-tuned for this specific task on biology research questions, student hypotheses, and feedback from GPT-4-Turbo. The GPT-4 feedback was shaped with a larger prompt so that the result is as didactically sound as possible. This model is based on the German chat LLM https://huggingface.co/LeoLM/leo-hessianai-13b-chat, which adapted Meta's Llama-2 model to German in a second pre-training phase and was subsequently tuned to answer knowledge questions in a larger fine-tuning phase. ## Model Details - **Model type:** Causal decoder-only transformer language model - **Finetuned from model [optional]:** https://huggingface.co/LeoLM/leo-hessianai-13b-chat - **Language(s) (NLP):** German - **License:** https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt ## Use in 🤗Transformers Installation: ``` pip install transformers torch sentencepiece ``` For faster inference, not strictly required (recommendation from https://huggingface.co/LeoLM/leo-hessianai-13b-chat): ```bash pip install packaging ninja pip install flash-attn==v2.1.1 --no-build-isolation pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary ``` Load the model: ```python from transformers import pipeline import torch question = "Wie beeinflusst die Temperatur die Aktivität der Hefe im Pizzateig?" hypothesis = "Je höher die Temperatur, desto höher die Hefeaktivität im Pizzateig." prompt = format_prompt_chat(question, hypothesis, with_additional_prompt=True) generator = pipeline(model="Tobistd/hypothesen-13b", device="cuda", torch_dtype=torch.float16, trust_remote_code=True) # True for flash-attn2 else False print(generator(prompt, do_sample=True, top_p=0.95, max_length=2048)) ``` ### Example: **TODO** ## Prompting Template as in https://huggingface.co/LeoLM/leo-hessianai-13b-chat: ``` """ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant """ ``` Example function for the prompt format: ```python def format_prompt_chat(question, hypothesis, with_additional_prompt=True): if with_additional_prompt: instructions = open_file("prompt/assistantInstructions.txt").strip() system_prompt = f"<|im_start|>system\n{instructions}" else: system_prompt = "<|im_start|>system\nNachfolgend findest du eine biologische Forschungsfrage und eine entwickelte Schülerhypothese. Gebe dem Schüler auf seine Hypothese ein angemessenes Feedback, das didaktisch möglichst wertvoll formuliert ist." system_prompt += "<|im_end|>" student = f"## Forschungsfrage: {question}\n## Hypothese: {hypothesis}" prompt = f"{system_prompt}\n<|im_start|>user\n{student}<|im_end|>\n<|im_start|>assistant\n" return prompt ``` ## Finetuning Details - A dataset with 1093 rows was used for training. - Training ran on a TPU v3-32 Pod slice (https://cloud.google.com/tpu/docs/v3?hl=de). - The model and the data were distributed across 4 TPU-v3-8 nodes, i.e. 32 TensorCores (devices) in total. - Training took 35 minutes and 52 seconds.
| Hyperparameter | Value | |---|---| | Num epochs | 2 | | Examples per epoch | 1093 // 8 // 32 = 4 | | Batch size | 4*32 = 128 | | Learning rate | 3e-5 | | LR scheduler | Cosine | | Optimizer | AdamW | | Adam betas | (0.9, 0.95) | ## Dataset Details Example (German, as stored in the training data): ``` Forschungsfrage: Welchen Einfluss hat die Temperatur auf die Aktivität der Hefe im Pizzateig? Hypothese: Durch die Wärme verdampft das Wasser im Teig und steigt und steigt nach oben. Feedback: Eure Hypothese stellt keinen direkten Zusammenhang zwischen der Temperatur und der Aktivität der Hefe her. Es beschreibt eher einen allgemeinen Prozess bei Erwärmung. Versucht, den Fokus auf die spezifische Frage zu legen, wie die Temperatur die Aktivität der Hefe beeinflusst. Überlegt, wie die Temperaturänderung die Hefe zum Wachsen, sich zu vermehren oder ihre Aktivität zu ändern veranlassen könnte. ``` ## Evaluation **TODO**
{"library_name": "transformers", "tags": []}
Tobistd/hypothesen-13b
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "custom_code", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:00:21+00:00
null
null
{}
squaadinc/1714330895612x161047003008860160
null
[ "region:us" ]
null
2024-04-28T19:01:35+00:00
zero-shot-image-classification
transformers
{}
jmzzomg/clip-vit-base-patch32-onnx
null
[ "transformers", "onnx", "clip", "zero-shot-image-classification", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:01:45+00:00
reinforcement-learning
ml-agents
# **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐢 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: vicha-w/ppo-Pyramids 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids"]}
vicha-w/ppo-Pyramids
null
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
null
2024-04-28T19:02:56+00:00
null
null
{}
squaadinc/1714330987005x635338597628117000
null
[ "region:us" ]
null
2024-04-28T19:03:05+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/r4dp3jy
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:03:44+00:00
null
null
{}
squaadinc/1714331043244x310324238283702300
null
[ "region:us" ]
null
2024-04-28T19:04:05+00:00
token-classification
transformers
{}
AliSaadatV/esm2_t12_35M_UR50D-finetuned-REGION_earlystop_70_15_15
null
[ "transformers", "tensorboard", "safetensors", "esm", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:04:30+00:00
null
null
{}
hiatus999/test_cap
null
[ "region:us" ]
null
2024-04-28T19:04:49+00:00
null
null
{}
ikanikan/percobaan
null
[ "region:us" ]
null
2024-04-28T19:05:02+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-ar

This model is a fine-tuned version of [tner/xlm-roberta-base-panx-dataset-ar](https://huggingface.co/tner/xlm-roberta-base-panx-dataset-ar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1977
- F1: 0.8803

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2179        | 1.0   | 188  | 0.1977          | 0.8803 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
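Since the card does not yet include a usage snippet, here is a minimal, hedged sketch of how a token-classification checkpoint like this is typically loaded with the standard pipeline; the Arabic example sentence is purely illustrative and the entity labels depend on the PAN-X training data:

```python
from transformers import pipeline

# Load the fine-tuned Arabic NER model and group word pieces into entity spans
ner = pipeline(
    "token-classification",
    model="RamziZayed/xlm-roberta-base-finetuned-panx-ar",
    aggregation_strategy="simple",
)

# Illustrative Arabic sentence ("Mohammed lives in Beirut")
print(ner("ΩŠΨΉΩŠΨ΄ Ω…Ψ­Ω…Ψ― في Ψ¨ΩŠΨ±ΩˆΨͺ"))
```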
{"tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "tner/xlm-roberta-base-panx-dataset-ar", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-ar", "results": []}]}
RamziZayed/xlm-roberta-base-finetuned-panx-ar
null
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:tner/xlm-roberta-base-panx-dataset-ar", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:06:18+00:00
text-generation
peft
# Model Card for molbal/drama-mistral

Text completion model trained on public domain novels.

## Model Details

### Model Description

This model is trained on a large corpus of novels from various drama categories, including detective fiction, crime nonfiction, mystery fiction, gothic fiction, horror, romantic fiction, short stories, and western. The model is able to generate text that is similar in style and tone to the novels in the dataset.

- **Developed by:** BΓ‘lint MolnΓ‘r-KalΓ³ https://huggingface.co/molbal
- **Model type:** Text completion model
- **Language(s) (NLP):** English only
- **License:** Apache license 2.0
- **Finetuned from model [optional]:** unsloth/mistral-7b-v0.2-bnb-4bit

### Model Sources

- **Repository:** https://huggingface.co/datasets/molbal/dramallama-novels
- **Demo:** https://huggingface.co/datasets/molbal/dramallama-novels/viewer/default/train

### Training details

Trained for ~19 hours on an RTX 4090 rented from the cloud provider vast.ai, using Unsloth and its wrapper scripts found in https://github.com/molbal/llm-text-completion-finetune.
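A minimal usage sketch (assumptions: the repository's PEFT adapter can be loaded directly with `AutoPeftModelForCausalLM` on top of the stated base model, and the prompt and generation parameters below are purely illustrative):

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the adapter together with its base model (unsloth/mistral-7b-v0.2-bnb-4bit)
model = AutoPeftModelForCausalLM.from_pretrained("molbal/drama-mistral", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-v0.2-bnb-4bit")

# Text completion: provide the opening of a scene and let the model continue it
prompt = "The detective stepped into the dim hallway and"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The repository also ships a GGUF file, so llama.cpp-style runtimes are an alternative to the PEFT route sketched here.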
{"language": ["en"], "license": "apache-2.0", "library_name": "peft", "tags": ["art"], "datasets": ["molbal/dramallama-novels"], "base_model": "unsloth/mistral-7b-v0.2-bnb-4bit", "pipeline_tag": "text-generation"}
molbal/drama-mistral
null
[ "peft", "gguf", "art", "text-generation", "en", "dataset:molbal/dramallama-novels", "base_model:unsloth/mistral-7b-v0.2-bnb-4bit", "license:apache-2.0", "region:us" ]
null
2024-04-28T19:07:46+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Reihaneh/wav2vec2_fy_nl_en_de_common_voice_16
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:08:05+00:00
null
fastai
# Amazing! πŸ₯³ Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using πŸ€— Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🀝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
{"tags": ["fastai"]}
mendozalopez/futurama
null
[ "fastai", "region:us" ]
null
2024-04-28T19:09:46+00:00
image-classification
null
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

From the dataset provided, only the spiral galaxies were used for training. Detection accuracy is below 10%, so this version is not useful.

## Model Details

### Model Description

- **Developed by:** Jeroen den Otter
- **Funded by:** Minnesota State University | Physics and Astronomy department
- **Model type:** YoloV9 Extensive
- **Language(s) (NLP):** Python
- **License:** Apache 2.0
{"language": ["en"], "license": "apache-2.0", "datasets": ["IT-Guy007/Galaxy-detection-data"], "pipeline_tag": "image-classification"}
IT-Guy007/YoloV9e-v2
null
[ "image-classification", "en", "dataset:IT-Guy007/Galaxy-detection-data", "license:apache-2.0", "region:us" ]
null
2024-04-28T19:10:07+00:00
null
fastai
# Amazing! πŸ₯³ Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using πŸ€— Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🀝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
{"tags": ["fastai"]}
mendozalopez/entregable2
null
[ "fastai", "region:us" ]
null
2024-04-28T19:10:51+00:00
automatic-speech-recognition
transformers
{}
chandra21/xlsr_hindi_withLM_4gram
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:11:14+00:00
null
null
{"license": "openrail"}
MarcyKanek/Leee
null
[ "license:openrail", "region:us" ]
null
2024-04-28T19:12:00+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
BotoxBernd/Schema-Links-mistral-7B-v0.1
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:12:39+00:00
text-generation
transformers
# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
Dam-Buty/phimily-0.1
null
[ "transformers", "tensorboard", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:13:05+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# SaimaAyub/roberta-base-finetuned-wikitext2

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5061
- Validation Loss: 1.4342
- Epoch: 1

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5537     | 1.4617          | 0     |
| 1.5061     | 1.4342          | 1     |

### Framework versions

- Transformers 4.40.1
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "roberta-base", "model-index": [{"name": "SaimaAyub/roberta-base-finetuned-wikitext2", "results": []}]}
SaimaAyub/roberta-base-finetuned-wikitext2
null
[ "transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "base_model:roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:17:14+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# 0.0001_4iters_bs256_nodpo_only4w_iter_5

This model is a fine-tuned version of [ShenaoZhang/0.0001_4iters_bs256_nodpo_only4w_iter_4](https://huggingface.co/ShenaoZhang/0.0001_4iters_bs256_nodpo_only4w_iter_4) on the updated and the original datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.0001_4iters_bs256_nodpo_only4w_iter_4", "model-index": [{"name": "0.0001_4iters_bs256_nodpo_only4w_iter_5", "results": []}]}
ShenaoZhang/0.0001_4iters_bs256_nodpo_only4w_iter_5
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.0001_4iters_bs256_nodpo_only4w_iter_4", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:17:37+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/2sc0dwn
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:18:59+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Meta-Llama-3-8B-Instruct_fictional_arc_German_v1

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
{"license": "other", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Meta-Llama-3-8B-Instruct_fictional_arc_German_v1", "results": []}]}
yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_German_v1
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:19:28+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]}
basakerdogan/cyber-jarvis-llama3-4bit
null
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-28T19:19:57+00:00
null
null
{}
TheStinger/ulteriormotivesv2
null
[ "region:us" ]
null
2024-04-28T19:21:30+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/otdyfg0
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:23:36+00:00
null
null
{}
wallaceblaia/whisper-large-v3-pt
null
[ "region:us" ]
null
2024-04-28T19:25:33+00:00
null
null
{}
hippoleveque/codeparrot-ds
null
[ "region:us" ]
null
2024-04-28T19:25:50+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lunarsylph/stablecell_v47
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:26:03+00:00
null
transformers
# hus960/multimaster-7b-v6-Q4_K_M-GGUF

This model was converted to GGUF format from [`ibivibiv/multimaster-7b-v6`](https://huggingface.co/ibivibiv/multimaster-7b-v6) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibivibiv/multimaster-7b-v6) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo hus960/multimaster-7b-v6-Q4_K_M-GGUF --model multimaster-7b-v6.Q4_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo hus960/multimaster-7b-v6-Q4_K_M-GGUF --model multimaster-7b-v6.Q4_K_M.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m multimaster-7b-v6.Q4_K_M.gguf -n 128
```
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "model-index": [{"name": "multimaster-7b-v6", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 72.78, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v6", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 88.77, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v6", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 64.74, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v6", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 70.89}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v6", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 86.42, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v6", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 70.36, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v6", "name": "Open LLM Leaderboard"}}]}]}
hus960/multimaster-7b-v6-Q4_K_M-GGUF
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:26:35+00:00
null
null
{}
Danny-Moldovan/llama-7b-qlora-ultrachat-2
null
[ "safetensors", "region:us" ]
null
2024-04-28T19:26:51+00:00
null
null
{"license": "openrail"}
Coolwowsocoolwow/Curly_Brace
null
[ "license:openrail", "region:us" ]
null
2024-04-28T19:28:00+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# zephyr-7b-gemma-sft-african-ultrachat-2000k

This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the masakhane/african-ultrachat and the israel/untrachat_en datasets.
It achieves the following results on the evaluation set:
- Loss: 1.1549

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0785        | 1.0   | 17748 | 1.2602          |
| 0.6614        | 2.0   | 35496 | 1.1089          |
| 0.2983        | 3.0   | 53244 | 1.1549          |

### Framework versions

- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
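The card above has no usage snippet; the following is a minimal, hedged sketch (assumptions: the checkpoint ships a chat template in its tokenizer config, as alignment-handbook SFT runs usually do, and the Swahili example turn plus the generation parameters are purely illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "masakhane/zephyr-7b-gemma-sft-african-ultrachat-200k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative chat turn; the model was tuned on African-language UltraChat data
messages = [{"role": "user", "content": "Habari! Unaweza kunisaidia vipi leo?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```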
{"license": "gemma", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["masakhane/african-ultrachat", "israel/untrachat_en"], "base_model": "google/gemma-7b", "model-index": [{"name": "zephyr-7b-gemma-sft-african-ultrachat-2000k", "results": []}]}
masakhane/zephyr-7b-gemma-sft-african-ultrachat-200k
null
[ "transformers", "safetensors", "gemma", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:masakhane/african-ultrachat", "dataset:israel/untrachat_en", "base_model:google/gemma-7b", "license:gemma", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:28:04+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/8tk5skx
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:29:45+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama_DPO_model This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2506 - Rewards/chosen: 0.2764 - Rewards/rejected: -1.0388 - Rewards/accuracies: 1.0 - Rewards/margins: 1.3152 - Logps/rejected: -194.5943 - Logps/chosen: -156.0318 - Logits/rejected: -1.0532 - Logits/chosen: -0.8577 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.3358 | 0.79 | 200 | 0.3244 | 0.2277 | -0.7696 | 1.0 | 0.9973 | -191.9022 | -156.5185 | -1.0547 | -0.8590 | | 0.2428 | 1.59 | 400 | 0.2506 | 0.2764 | -1.0388 | 1.0 | 1.3152 | -194.5943 | -156.0318 | -1.0532 | -0.8577 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.1 - Pytorch 2.2.0+cu118 - Datasets 2.17.1 - Tokenizers 0.15.2
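Since the card stops at the training summary, the following is a hypothetical loading sketch (not part of the original card): it attaches the DPO-trained LoRA adapter from this record to its stated base model and assumes access to the gated meta-llama/Llama-2-7b-hf weights.

```python
# Hypothetical sketch: load the Llama-2 base model and apply the DPO-trained adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"        # stated base model (gated, access assumed)
adapter_id = "thorirhrafn/llama_DPO_model"  # this record's adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Summarise what DPO fine-tuning changes about a model.", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```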
{"license": "llama2", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "llama_DPO_model", "results": []}]}
thorirhrafn/llama_DPO_model
null
[ "peft", "tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-04-28T19:30:34+00:00
text-generation
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> A fine-tuned Llama-3-8B-instruct trained to provide initial support and guidance in mental health by leveraging the power of large language models (LLMs). It's a fine-tuned version of the LLaMA 3 model specifically trained on mental health counseling conversations. <img src="https://cdn-uploads.huggingface.co/production/uploads/65f2cc4457fbd3c9fb65b34c/ujViqdS9zgW-IrEFbAKLw.jpeg" width="480" style="width:400px;"> ## Model Details Llama-3-8B-chat-psychotherapist Provides active listening: The model is trained to follow conversation flows and respond attentively to user expressions. Offers empathetic support: The model's responses are designed to be supportive and understanding, fostering a safe space for users to share their concerns. Guides users towards self-reflection: The model can ask open-ended questions and prompt users to explore their thoughts and feelings. Offers resources and information: Depending on the conversation, the model can provide users with relevant mental health resources and information. Important to Note: **This model is intended for initial support and guidance, not a replacement for professional mental health care. It's crucial to emphasize that users should seek professional help if needed. The model is still under development, and its responses may require human oversight for accuracy and safety.** Further Considerations: You can specify the type of mental health conversations the model is trained on (e.g., anxiety, depression, general well-being). Briefly mention the dataset used for fine-tuning to showcase the model's training grounds. If applicable, mention any limitations of the model, such as inability to diagnose or provide specific treatment plans. ## How to Get Started with the Model Use the code below to get started with the model. ### Read model ``` from peft import PeftModel, PeftConfig from transformers import AutoModelForCausalLM model_id = "zementalist/llama-3-8B-chat-psychotherapist" config = PeftConfig.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path) model = PeftModel.from_pretrained(model, model_id) ``` ### Inference ``` question = "I feel like I don't exist and my body is not my own, I'm sombody else observing me, what causes these ideas?" messages = [ {"role": "system", "content": "Answer the following inquiry:"}, {"role": "user", "content": question} ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.01 ) response = outputs[0][input_ids.shape[-1]:] output = tokenizer.decode(response, skip_special_tokens=True) print(output) ``` ## Training Details ### Training Data 1. [Amod/mental_health_counseling_conversations](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations) 2. [mpingale/mental-health-chat-dataset](https://huggingface.co/datasets/mpingale/mental-health-chat-dataset) 3. 
[heliosbrahma/mental_health_chatbot_dataset](https://huggingface.co/datasets/heliosbrahma/mental_health_chatbot_dataset) #### Training Hyperparameters * epochs: 2 * learning rate: 6e-5 * learning rate scheduler type: constant * Warmup ratio: 0.03 * optimizer: paged_adamw_32bit * weight_decay: 0.001 * fp16: True ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> | **Step** | **Training Loss** | |---------|-------------------| | 100 | 2.637700 | | 200 | 2.350900 | | 300 | 2.234500 | | 400 | 2.230800 | | 500 | 2.194900 | | 600 | 2.041100 | | 700 | 2.033800 | | 800 | 1.988600 | | 900 | 1.898900 | | 1000 | 1.823600 | | 1100 | 1.943700 | | 1200 | 1.789900 | | 1300 | 1.840700 | | 1400 | 1.760900 | | 1500 | 1.718400 | | 1600 | 1.725600 | | 1700 | 1.717600 | | 1800 | 1.628200 | | 1900 | 1.693300 | | 2000 | 1.535600 | | 2100 | 1.630100 | | 2200 | 1.592300 | | 2300 | 1.628400 | | 2400 | 1.462500 | | 2500 | 1.504500 | | 2600 | 1.517500 | | 2700 | 1.375100 | | 2800 | 1.452800 | | 2900 | 1.527200 | | 3000 | 1.418600 | | 3100 | 1.437700 | | 3200 | 1.247900 | | 3300 | 1.134900 | | 3400 | 1.283800 | | 3500 | 1.378600 | | 3600 | 1.114600 | | 3700 | 1.300900 | | 3800 | 1.225700 | | 3900 | 1.190000 | | 4000 | 1.260200 | | 4100 | 1.053300 | | 4200 | 1.173000 | | 4300 | 1.275900 | | 4400 | 1.079800 | | 4500 | 1.121100 | | 4600 | 1.043100 | | 4700 | 1.121000 | | 4800 | 1.087200 | | 4900 | 1.102400 | | 5000 | 1.126100 | | 5100 | 1.182200 | | 5200 | 1.156500 | | 5300 | 0.979700 | | 5400 | 1.145300 | | 5500 | 1.018100 | | 5600 | 1.070500 | | 5700 | 0.954900 | | 5800 | 1.074200 | | 5900 | 1.022800 | | 6000 | 1.037700 | | 6100 | 0.913400 | | 6200 | 0.929500 | ### Inference example 1. Question: I feel like I don't exist and my body is not my own, like if I'm sombody else observin me, what could be this disorder? * * Answer: You're describing a phenomenon called depersonalization. It's a common symptom of anxiety disorders, post-traumatic stress disorder (PTSD), and other mental health conditions. <br> Depersonalization can make you feel like you're observing yourself from the outside, like you're watching a movie or reading a book about yourself. You might also feel like you're not in control of your body or that you're just a puppet. <br> This feeling can be very distressing and can make you wonder if you're even alive. It's a very disorienting and unsettling experience. <br> If you're experiencing depersonalization, it's important to seek help from a mental health professional. They can help you identify the underlying causes of your feelings and develop coping strategies to manage them. <br> 2. Question: I'm always confused when making decisions, I cannot choose one option, I must have only one option so I can make a decision, what are the causes of being indecisive? * * Answer: Indecisiveness is a common phenomenon that many people experience. It's not a mental health condition, but it can be frustrating and time-consuming. Here are some possible causes of indecisiveness: <ol> <li>Fear of making the wrong decision: Sometimes, people are afraid of making the wrong choice, which can lead to indecisiveness. This fear can be due to various factors such as fear of failure, fear of rejection, or fear of the unknown.</li> <li>Lack of clear goals: When people don't have clear goals, they can be indecisive. This is because they don't know what they want, which can lead to confusion and indecisiveness.</li> <li>Overthinking: Overthinking can be a major cause of indecisiveness. 
When people overthink, they can get caught up in their thoughts and be unable to make a decision.</li> <li>Lack of self-confidence: People who lack self-confidence can be indecisive. This is because they don't believe in themselves and are afraid of making the wrong decision.</li> <li>Avoidance: Avoiding a decision can be a cause of indecisiveness. When people avoid making a decision, they can be indecisive because they don't know</li> </ol> - **Developed by:** [Zeyad Khalid]
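One gap in the snippets above: the inference example calls `tokenizer`, which the loading example never creates. A minimal completion, assuming the tokenizer can be read from this adapter repository (or, failing that, from the base model named in the PEFT config), would be:

```python
# Completes the loading example above; the inference snippet needs a tokenizer.
# It is assumed to be available from this adapter repository; otherwise load it
# from config.base_model_name_or_path.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zementalist/llama-3-8B-chat-psychotherapist")
```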
{"license": "mit", "library_name": "peft", "tags": ["SFT", "PEFT", "Mental Health", "Psychotherapy", "Fine-tuning", "Text Generation", "Chatbot"], "datasets": ["mpingale/mental-health-chat-dataset", "Amod/mental_health_counseling_conversations", "heliosbrahma/mental_health_chatbot_dataset"], "widget": [{"example_title": "Depersonalization", "messages": [{"role": "user", "content": "I feel like I don't exist and my body is not my own, like if I'm sombody else observin me, what could be this disorder?"}]}, {"example_title": "Confused", "messages": [{"role": "user", "content": "I'm always confused when making decisions, I cannot choose one option, I must have only one option so I can make a decision, what are the causes of being indecisive?"}]}], "inference": {"parameters": {"max_new_tokens": 256, "stop": ["<|end_of_text|>", "<|eot_id|>"]}}, "pipeline_tag": "text-generation"}
zementalist/llama-3-8B-chat-psychotherapist
null
[ "peft", "safetensors", "llama", "SFT", "PEFT", "Mental Health", "Psychotherapy", "Fine-tuning", "Text Generation", "Chatbot", "text-generation", "conversational", "dataset:mpingale/mental-health-chat-dataset", "dataset:Amod/mental_health_counseling_conversations", "dataset:heliosbrahma/mental_health_chatbot_dataset", "license:mit", "4-bit", "region:us" ]
null
2024-04-28T19:32:08+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_instruct_generation This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.8488 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1973 | 0.0305 | 20 | 1.1052 | | 1.0347 | 0.0610 | 40 | 0.9958 | | 0.9213 | 0.0915 | 60 | 0.9600 | | 0.8886 | 0.1220 | 80 | 0.9406 | | 0.9314 | 0.1524 | 100 | 0.9281 | | 0.9668 | 0.1829 | 120 | 0.9197 | | 0.887 | 0.2134 | 140 | 0.9128 | | 0.8727 | 0.2439 | 160 | 0.9066 | | 0.8571 | 0.2744 | 180 | 0.9005 | | 0.8833 | 0.3049 | 200 | 0.8963 | | 0.8466 | 0.3354 | 220 | 0.8912 | | 0.9015 | 0.3659 | 240 | 0.8865 | | 0.8602 | 0.3963 | 260 | 0.8822 | | 0.8989 | 0.4268 | 280 | 0.8788 | | 0.8452 | 0.4573 | 300 | 0.8758 | | 0.8764 | 0.4878 | 320 | 0.8730 | | 0.8702 | 0.5183 | 340 | 0.8708 | | 0.8758 | 0.5488 | 360 | 0.8676 | | 0.8071 | 0.5793 | 380 | 0.8638 | | 0.8473 | 0.6098 | 400 | 0.8618 | | 0.8822 | 0.6402 | 420 | 0.8586 | | 0.8742 | 0.6707 | 440 | 0.8560 | | 0.8526 | 0.7012 | 460 | 0.8533 | | 0.8116 | 0.7317 | 480 | 0.8511 | | 0.8593 | 0.7622 | 500 | 0.8488 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
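A hypothetical inference sketch for this adapter (not included in the original card): it loads the stated Mistral-7B-Instruct-v0.1 base model, applies the LoRA weights from this record, and assumes the usual Mistral `[INST]` prompt format.

```python
# Hypothetical sketch: attach the LoRA adapter from this record to its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
adapter_id = "Cem13/mistral_instruct_generation"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "[INST] Write one sentence about the Aegean coast. [/INST]"  # prompt format assumed
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```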
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.1", "model-index": [{"name": "mistral_instruct_generation", "results": []}]}
Cem13/mistral_instruct_generation
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-28T19:32:46+00:00
audio-classification
transformers
{}
MarceloMarques/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
null
[ "transformers", "tensorboard", "safetensors", "audio-spectrogram-transformer", "audio-classification", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:34:55+00:00
reinforcement-learning
stable-baselines3
# **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
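A hypothetical completion of the usage TODO above, using the `huggingface_sb3` helper; the checkpoint filename inside the repository is an assumption based on the usual `<algo>-<env>.zip` naming convention.

```python
# Hypothetical completion of the TODO; the checkpoint filename is assumed.
import gymnasium as gym
import panda_gym  # noqa: F401 - registers the Panda environments
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="Joalbom14/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _states = model.predict(obs, deterministic=True)
obs, reward, terminated, truncated, _info = env.step(action)
```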
{"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.28 +/- 0.10", "name": "mean_reward", "verified": false}]}]}]}
Joalbom14/a2c-PandaReachDense-v3
null
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-28T19:35:56+00:00
text-generation
transformers
{}
pruning/fj4m6ks
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:37:06+00:00
null
null
{}
apandit99/few_shot_hans_validation
null
[ "region:us" ]
null
2024-04-28T19:37:27+00:00
null
null
{"license": "openrail"}
Coolwowsocoolwow/Disney_Fastplay_Announcer
null
[ "license:openrail", "region:us" ]
null
2024-04-28T19:38:41+00:00
text-generation
transformers
# NPC Model This repo contains the domain-specific NPC model we've fined-tuned from **Phi-3**, using LoRA. This model parses a text description of a game scene, and outputs commands like: * `say <player1> "Hello Adventurer, care to join me on a quest?` * `greet <player1>` * `attack <player1>` * Any other `<action> <param>` you add to the prompt! (We call these "skills"!) ⚠️ This model has been trained to **overfit** on our input prompt format. Follow it closely to reach optimal performance ⚠️ ## Usage **Make your life easier, use our [Python client library](https://github.com/GigaxGames/gigax)** * Instantiating the model using outlines: ```py from outlines import models from gigax.step import NPCStepper # Download model from the Hub model_name = "Gigax/NPC-LLM-7B" llm = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) # Our stepper takes in a Outlines model to enable guided generation # This forces the model to follow our output format model = models.Transformers(llm, tokenizer) # Instantiate a stepper: handles prompting + output parsing stepper = NPCStepper(model=model) ``` * Calling the model on your game's data: ```py from gigax.parse import CharacterAction from gigax.scene import ( Character, Item, Location, ProtagonistCharacter, ProtagonistCharacter, Skill, ParameterType, ) # Use sample data current_location = Location(name="Old Town", description="A quiet and peaceful town.") NPCs = [ Character( name="John the Brave", description="A fearless warrior", current_location=current_location, ) ] protagonist = ProtagonistCharacter( name="Aldren", description="Brave and curious", current_location=current_location, memories=["Saved the village", "Lost a friend"], quests=["Find the ancient artifact", "Defeat the evil warlock"], skills=[ Skill( name="Attack", description="Deliver a powerful blow", parameter_types=[ParameterType.character], ) ], psychological_profile="Determined and compassionate", ) items = [Item(name="Sword", description="A sharp blade")] events = [ CharacterAction( command="Say", protagonist=protagonist, parameters=[items[0], "What a fine sword!"], ) ] action = stepper.get_action( context=context, locations=locations, NPCs=NPCs, protagonist=protagonist, items=items, events=events, ) ``` ## Input prompt Here's a sample input prompt, showing you the format on which the model has been trained: ```txt - WORLD KNOWLEDGE: A vast open world full of mystery and adventure. - KNOWN LOCATIONS: Old Town - NPCS: John the Brave - CURRENT LOCATION: Old Town: A quiet and peaceful town. - CURRENT LOCATION ITEMS: Sword - LAST EVENTS: Aldren: Say Sword What a fine sword! - PROTAGONIST NAME: Aldren - PROTAGONIST PSYCHOLOGICAL PROFILE: Brave and curious - PROTAGONIST MEMORIES: Saved the village Lost a friend - PROTAGONIST PENDING QUESTS: Find the ancient artifact Defeat the evil warlock - PROTAGONIST ALLOWED ACTIONS: Attack <character> : Deliver a powerful blow Aldren: ``` ### πŸ€— We are currently working hard on training on the latest SoTA models (Phi-3, LLama, etc.), and on better data ! πŸ€— ## Model info - **Developed by:** Gigax - **Language(s) (NLP):** English - **Finetuned from model [optional]:** [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) - **Contact:** Join our [Discord](https://discord.gg/xES2Z8X4J6) for info, help, and more! 
## How to Cite ```bibtex @misc{NPC-LLM-3_8B, url={https://huggingface.co/Gigax/NPC-LLM-3_8B}, title={NPC-LLM-3_8B}, author={Gigax team} } ```
{"language": ["en"], "license": "mit"}
Gigax/NPC-LLM-3_8B
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:38:52+00:00
null
null
{}
nguyenminhly/mistral7b
null
[ "region:us" ]
null
2024-04-28T19:39:17+00:00
image-classification
sklearn
{"license": "mit", "library_name": "sklearn", "tags": ["art"], "datasets": ["fashion_mnist"], "metrics": ["accuracy"], "pipeline_tag": "image-classification"}
Prasanna18/FashionMNIST
null
[ "sklearn", "art", "image-classification", "dataset:fashion_mnist", "license:mit", "region:us" ]
null
2024-04-28T19:39:22+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_model This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6346 - Accuracy: 0.9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 7 | 1.0498 | 0.84 | | 1.4824 | 2.0 | 14 | 0.6346 | 0.9 | ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistral_model", "results": []}]}
zeyadusf/mistral_model
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-04-28T19:40:27+00:00
null
null
{}
M4RK-1/sokoban_solver
null
[ "region:us" ]
null
2024-04-28T19:40:54+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # banglat5-bcoqa This model is a fine-tuned version of [csebuetnlp/banglat5](https://huggingface.co/csebuetnlp/banglat5) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4135 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.2253 | 0.03 | 700 | 2.8997 | | 3.5361 | 0.06 | 1400 | 2.5143 | | 2.8585 | 0.09 | 2100 | 2.3899 | | 2.7763 | 0.12 | 2800 | 2.3435 | | 2.6044 | 0.15 | 3500 | 2.3001 | | 2.6166 | 0.18 | 4200 | 2.2498 | | 2.5002 | 0.21 | 4900 | 2.1958 | | 2.4498 | 0.24 | 5600 | 2.1454 | | 2.4349 | 0.27 | 6300 | 2.1049 | | 2.3176 | 0.3 | 7000 | 2.0382 | | 2.2667 | 0.33 | 7700 | 1.9124 | | 2.2382 | 0.36 | 8400 | 1.7847 | | 2.1296 | 0.39 | 9100 | 1.6963 | | 2.0856 | 0.42 | 9800 | 1.6489 | | 2.0527 | 0.45 | 10500 | 1.6299 | | 2.0363 | 0.48 | 11200 | 1.6085 | | 1.9999 | 0.51 | 11900 | 1.5947 | | 1.9888 | 0.54 | 12600 | 1.5661 | | 1.9438 | 0.58 | 13300 | 1.5666 | | 1.9365 | 0.61 | 14000 | 1.5636 | | 1.9311 | 0.64 | 14700 | 1.5502 | | 1.9649 | 0.67 | 15400 | 1.5419 | | 1.9782 | 0.7 | 16100 | 1.5309 | | 1.8764 | 0.73 | 16800 | 1.5147 | | 1.9236 | 0.76 | 17500 | 1.5066 | | 1.8818 | 0.79 | 18200 | 1.4963 | | 1.9031 | 0.82 | 18900 | 1.4939 | | 1.8583 | 0.85 | 19600 | 1.4923 | | 1.8436 | 0.88 | 20300 | 1.4948 | | 1.8258 | 0.91 | 21000 | 1.4784 | | 1.8701 | 0.94 | 21700 | 1.4642 | | 1.8413 | 0.97 | 22400 | 1.4807 | | 1.8417 | 1.0 | 23100 | 1.4654 | | 1.7898 | 1.03 | 23800 | 1.4711 | | 1.7661 | 1.06 | 24500 | 1.4632 | | 1.7223 | 1.09 | 25200 | 1.4514 | | 1.7461 | 1.12 | 25900 | 1.4568 | | 1.7457 | 1.15 | 26600 | 1.4492 | | 1.7588 | 1.18 | 27300 | 1.4500 | | 1.6475 | 1.21 | 28000 | 1.4515 | | 1.7428 | 1.24 | 28700 | 1.4377 | | 1.782 | 1.27 | 29400 | 1.4456 | | 1.6906 | 1.3 | 30100 | 1.4435 | | 1.6865 | 1.33 | 30800 | 1.4378 | | 1.7806 | 1.36 | 31500 | 1.4327 | | 1.7444 | 1.39 | 32200 | 1.4372 | | 1.7136 | 1.42 | 32900 | 1.4293 | | 1.7252 | 1.45 | 33600 | 1.4246 | | 1.7209 | 1.48 | 34300 | 1.4218 | | 1.7523 | 1.51 | 35000 | 1.4283 | | 1.6808 | 1.54 | 35700 | 1.4216 | | 1.7167 | 1.57 | 36400 | 1.4246 | | 1.7246 | 1.6 | 37100 | 1.4171 | | 1.7614 | 1.63 | 37800 | 1.4204 | | 1.6704 | 1.66 | 38500 | 1.4116 | | 1.6823 | 1.7 | 39200 | 1.4213 | | 1.6744 | 1.73 | 39900 | 1.4236 | | 1.7086 | 1.76 | 40600 | 1.4197 | | 1.7179 | 1.79 | 41300 | 1.4156 | | 1.6223 | 1.82 | 42000 | 1.4205 | | 1.6817 | 1.85 | 42700 | 1.4159 | | 1.6786 | 1.88 | 43400 | 1.4131 | | 1.7163 | 1.91 | 44100 | 1.4147 | | 1.6381 | 1.94 | 44800 | 1.4131 | | 1.6961 | 1.97 | 45500 | 1.4134 | | 1.6247 | 2.0 | 46200 | 1.4135 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
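A hypothetical inference sketch for this checkpoint (not from the original card); how the BCoQA passage, dialogue history, and question should be concatenated is not documented here, so the input string is only a placeholder.

```python
# Hypothetical sketch: load the fine-tuned BanglaT5 checkpoint and generate an answer.
# The expected input format is an assumption; replace the placeholder accordingly.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "arbitropy/banglat5-bcoqa"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "<passage, history and question in Bangla>"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```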
{"tags": ["generated_from_trainer"], "base_model": "csebuetnlp/banglat5", "model-index": [{"name": "banglat5-bcoqa", "results": []}]}
arbitropy/banglat5-bcoqa
null
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:csebuetnlp/banglat5", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:41:45+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Meta-Llama-3-8B-Instruct_fictional_arc_Italian_v1 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 36 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
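The card covers only the training setup; a hypothetical inference sketch for the resulting chat model follows, assuming the standard Llama-3-Instruct chat template and `<|eot_id|>` terminator.

```python
# Hypothetical sketch: chat with the fine-tuned Llama-3-8B-Instruct checkpoint.
# The Llama-3 chat template and <|eot_id|> terminator are assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Italian_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Raccontami una breve storia su un drago."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]
outputs = model.generate(input_ids, max_new_tokens=200, eos_token_id=terminators)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```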
{"license": "other", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Meta-Llama-3-8B-Instruct_fictional_arc_Italian_v1", "results": []}]}
yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Italian_v1
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:43:09+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOOwO/final21
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:46:27+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/z39r6dx
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:46:46+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xsum_aligned_smallT5_full This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the lilferrit/xsum_t5_distillation dataset. It achieves the following results on the evaluation set: - Loss: 2.4093 - Rouge1: 22.8498 - Rouge2: 4.7818 - Rougel: 17.2861 - Rougelsum: 18.0665 - Gen Len: 33.6366 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adafactor - lr_scheduler_type: constant - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 0.0 | 5 | 2.6444 | 22.3341 | 4.3395 | 16.2507 | 17.8303 | 46.2437 | | No log | 0.0 | 10 | 2.4093 | 22.8498 | 4.7818 | 17.2861 | 18.0665 | 33.6366 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
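A hypothetical usage sketch for this summarization checkpoint (not part of the original card); the `summarize:` prefix follows the usual T5 convention and is an assumption rather than something stated in the card.

```python
# Hypothetical sketch: run the distilled T5-small summarizer on one short article.
from transformers import pipeline

summarizer = pipeline("summarization", model="paulh27/xsum_aligned_smallT5_full")
article = (
    "The local council confirmed on Tuesday that the bridge will close for "
    "repairs next month, with diversions in place for up to six weeks."
)
print(summarizer("summarize: " + article, max_length=40, min_length=5)[0]["summary_text"])
```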
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["lilferrit/xsum_t5_distillation"], "metrics": ["rouge"], "base_model": "google-t5/t5-small", "model-index": [{"name": "xsum_aligned_smallT5_full", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "lilferrit/xsum_t5_distillation", "type": "lilferrit/xsum_t5_distillation"}, "metrics": [{"type": "rouge", "value": 22.8498, "name": "Rouge1"}]}]}]}
paulh27/xsum_aligned_smallT5_full
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:lilferrit/xsum_t5_distillation", "base_model:google-t5/t5-small", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:48:45+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
uday91/Llama-2-7b-chat-finetune-new2
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:50:55+00:00
text-generation
transformers
# Griffon-7B-Model_Stock Griffon-7B-Model_Stock is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): ## 🧩 Configuration ```yaml models: - model: ResplendentAI/Datura_7B - model: ResplendentAI/Paradigm_7B - model: DreadPoor/Harpy-7B-Model_Stock - model: cgato/Thespis-Mistral-7b-v0.7 merge_method: model_stock base_model: ResplendentAI/Paradigm_7B dtype: bfloat16 ``` ## πŸ’» Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "DreadPoor/Griffon-7B-Model_Stock" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit"]}
DreadPoor/Griffon-7B-Model_Stock
null
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:51:38+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/3adrn5n
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:54:21+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Ketki0203/punctuation-predict

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: nan
- Validation Loss: nan
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 300, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| nan        | nan             | 0     |
| nan        | nan             | 1     |
| nan        | nan             | 2     |

### Framework versions

- Transformers 4.40.0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
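The optimizer entry above is transformers' `AdamWeightDecay` paired with a `PolynomialDecay` schedule that runs from 2e-05 to 0.0 over 300 steps. A minimal sketch of rebuilding that optimizer with `transformers.create_optimizer` follows; the model loading and `num_labels` value are illustrative assumptions, not taken from the original training script.

```python
# Minimal sketch, not the original training script: rebuilds the optimizer
# described in the hyperparameters above using transformers' TF helper.
from transformers import TFAutoModelForTokenClassification, create_optimizer

# num_labels is an assumption; the actual punctuation label set is not documented.
model = TFAutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=4
)

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,            # initial_learning_rate of the PolynomialDecay schedule
    num_train_steps=300,     # decay_steps; the learning rate decays linearly to 0.0
    num_warmup_steps=0,
    weight_decay_rate=0.01,  # AdamWeightDecay's weight decay
)

model.compile(optimizer=optimizer)  # HF TF models fall back to their internal loss
```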
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "bert-base-multilingual-cased", "model-index": [{"name": "sejaltapdiya/punctuation-predict", "results": []}]}
sejaltapdiya/punctuation-predict
null
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "base_model:bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:54:38+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zura1101/gemma_peft_model_emotion_detection
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:54:43+00:00
null
null
{"license": "openrail"}
janikovakov/Eric_Cartman
null
[ "license:openrail", "region:us" ]
null
2024-04-28T19:55:41+00:00
text-generation
transformers
This is a 4-bit quantized version of Phi-3 Mini 4K Instruct (the base checkpoint is assumed to be `microsoft/Phi-3-mini-4k-instruct`). Quantization was done with bitsandbytes NF4 double quantization:

```
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Base checkpoint; assumed to be microsoft/Phi-3-mini-4k-instruct
foundation_model_name = 'microsoft/Phi-3-mini-4k-instruct'

# NF4 4-bit quantization with double quantization and bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit = True,
    bnb_4bit_use_double_quant = True,
    bnb_4bit_quant_type = 'nf4',
    bnb_4bit_compute_dtype = torch.bfloat16
)

model = AutoModelForCausalLM.from_pretrained(
    foundation_model_name,
    device_map = 'auto',
    quantization_config = bnb_config,
    trust_remote_code = True
)
```
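A minimal sketch of loading the already-quantized checkpoint from the Hub for inference is shown below; it assumes the saved repo carries its bitsandbytes quantization config, and the prompt and generation settings are illustrative.

```python
# Sketch: load the pre-quantized checkpoint directly from the Hub.
# The chat-template call assumes the tokenizer ships Phi-3's template.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "bongodongo/phi-3-mini-4k-instruct-q4"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Summarize what 4-bit NF4 quantization does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```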
{}
bongodongo/phi-3-mini-4k-instruct-q4
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-28T19:55:55+00:00
null
null
{}
vfaria/robertatest
null
[ "region:us" ]
null
2024-04-28T19:56:22+00:00
null
transformers
# Uploaded model

- **Developed by:** EternalRecursion
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
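A minimal loading sketch, assuming this repo holds the Unsloth-finetuned weights (or LoRA adapters) on top of unsloth/llama-3-8b-bnb-4bit; the sequence length and prompt below are illustrative assumptions.

```python
# Sketch: load the finetuned model in 4-bit with Unsloth for inference.
# max_seq_length and the prompt are illustrative, not from the original run.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="EternalRecursion/llm_clone_llama",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer(["Write a short greeting."], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```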
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
EternalRecursion/llm_clone_llama
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-28T19:57:04+00:00
null
null
{}
DRAGOO/VGG16
null
[ "region:us" ]
null
2024-04-28T19:58:12+00:00
null
null
{}
Firemannbossss/ColePalmer
null
[ "region:us" ]
null
2024-04-28T19:58:16+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v1

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
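A sketch of `TrainingArguments` mirroring the hyperparameters listed above; the output directory is an assumed name, and the dataset and SFT wiring are omitted since they are not documented in this card.

```python
# Sketch only: transformers TrainingArguments that mirror the listed hyperparameters.
# output_dir is an assumption; model/dataset/trainer wiring is intentionally omitted.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama3-8b-instruct-arc-japanese",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,  # gives the total train batch size of 16
    num_train_epochs=36,
    lr_scheduler_type="linear",
    seed=42,
    optim="adamw_torch",  # default betas=(0.9, 0.999), epsilon=1e-08 match the card
)
```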
{"license": "other", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v1", "results": []}]}
yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v1
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T19:58:47+00:00
null
null
{}
Fduv/fine_tuned_text-to-sql_codegemma-7b-v0.2_GGUF_q4_k_m
null
[ "gguf", "region:us" ]
null
2024-04-28T20:01:38+00:00
text2text-generation
transformers
{}
asimokby/Turkish-OSCAR-GEC-v0
null
[ "transformers", "safetensors", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-28T20:01:46+00:00
null
null
{"license": "openrail"}
janikovakov/Kanye_West
null
[ "license:openrail", "region:us" ]
null
2024-04-28T20:01:59+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/e0d4h8r
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-28T20:02:17+00:00