| Column | Type | Range / Values |
|:--|:--|:--|
| `pipeline_tag` | stringclasses | 48 values |
| `library_name` | stringclasses | 205 values |
| `text` | stringlengths | 0 to 18.3M |
| `metadata` | stringlengths | 2 to 1.07B |
| `id` | stringlengths | 5 to 122 |
| `last_modified` | null | always null |
| `tags` | listlengths | 1 to 1.84k |
| `sha` | null | always null |
| `created_at` | stringlengths | 25 to 25 |
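A minimal sketch of loading and inspecting a dataset with this schema using the 🤗 `datasets` library; the dataset id `user/model-cards` is a hypothetical placeholder, since the actual repository id is not given here.

```python
# A minimal sketch, assuming a hypothetical dataset id "user/model-cards";
# substitute the real repository id for this dump.
from datasets import load_dataset

ds = load_dataset("user/model-cards", split="train")  # hypothetical id
print(ds.column_names)  # ['pipeline_tag', 'library_name', 'text', 'metadata', ...]

# Inspect one row: 'text' holds the model card body, 'metadata' the YAML
# front matter serialized as JSON, 'tags' the repo tags.
row = ds[0]
print(row["id"], row["pipeline_tag"], row["created_at"])
```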
text-generation
transformers
{"license": "apache-2.0"}
ramimu/ali-ai-model
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-24T07:00:31+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# PolizzeDonut-UltimaProvaCluster-Cluster1di7-5epochs

This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
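The card does not yet include usage code; below is a minimal inference sketch with 🤗 Transformers, assuming the repo ships both a `DonutProcessor` and the fine-tuned weights, and that `"<s_cord-v2>"` stands in for whatever task prompt token was actually used during fine-tuning.

```python
# A minimal sketch, not from the card: the task prompt "<s_cord-v2>" is a
# placeholder; replace it with the token used during fine-tuning.
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "tedad09/PolizzeDonut-UltimaProvaCluster-Cluster1di7-5epochs"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(
    "<s_cord-v2>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```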
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-UltimaProvaCluster-Cluster1di7-5epochs", "results": []}]}
tedad09/PolizzeDonut-UltimaProvaCluster-Cluster1di7-5epochs
null
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:00:32+00:00
null
null
# DavidAU/GALAXY-16B-v1.0-Q8_0-GGUF

This model was converted to GGUF format from [`TeeZee/GALAXY-16B-v1.0`](https://huggingface.co/TeeZee/GALAXY-16B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/TeeZee/GALAXY-16B-v1.0) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/GALAXY-16B-v1.0-Q8_0-GGUF --model galaxy-16b-v1.0.Q8_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/GALAXY-16B-v1.0-Q8_0-GGUF --model galaxy-16b-v1.0.Q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m galaxy-16b-v1.0.Q8_0.gguf -n 128
```
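Beyond the CLI, the GGUF file can also be driven from Python; a minimal sketch with the third-party `llama-cpp-python` bindings, which the card itself does not mention, so treat the package choice and parameters as an assumption.

```python
# A minimal sketch using llama-cpp-python (pip install llama-cpp-python);
# this package is an assumption, not something the card recommends.
from llama_cpp import Llama

llm = Llama(
    model_path="galaxy-16b-v1.0.Q8_0.gguf",  # path to the downloaded GGUF file
    n_ctx=2048,  # context size, matching the llama-server example above
)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```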
{"language": ["en"], "license": "apache-2.0", "tags": ["not-for-all-audiences", "llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs", "athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "Open-Orca/SlimOrca", "MinervaAI/Aesir-Preview", "allenai/ultrafeedback_binarized_cleaned"]}
DavidAU/GALAXY-16B-v1.0-Q8_0-GGUF
null
[ "gguf", "not-for-all-audiences", "llama-cpp", "gguf-my-repo", "en", "dataset:Intel/orca_dpo_pairs", "dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "dataset:Open-Orca/SlimOrca", "dataset:MinervaAI/Aesir-Preview", "dataset:allenai/ultrafeedback_binarized_cleaned", "license:apache-2.0", "region:us" ]
null
2024-04-24T07:01:32+00:00
null
null
{"license": "apache-2.0"}
FydeOS/Qwen1.5-1_8B_rkLLM
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-24T07:02:20+00:00
null
null
{"license": "openrail"}
Coolwowsocoolwow/Mrs_Martin
null
[ "license:openrail", "region:us" ]
null
2024-04-24T07:02:52+00:00
null
peft
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.10.1.dev0
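The "How to Get Started" section above is still empty; a minimal loading sketch, assuming this repo holds a PEFT (LoRA) adapter for the base model named in the card metadata, `Trelis/Llama-2-7b-chat-hf-sharded-bf16`.

```python
# A minimal sketch, assuming the repo contains a PEFT adapter for the
# base model listed in the card metadata.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Trelis/Llama-2-7b-chat-hf-sharded-bf16", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Vibhav1612/LlamaQuantized")
tokenizer = AutoTokenizer.from_pretrained("Trelis/Llama-2-7b-chat-hf-sharded-bf16")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```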
{"library_name": "peft", "base_model": "Trelis/Llama-2-7b-chat-hf-sharded-bf16"}
Vibhav1612/LlamaQuantized
null
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2024-04-24T07:03:32+00:00
null
null
{}
Aishu1102/gpt2
null
[ "region:us" ]
null
2024-04-24T07:04:13+00:00
null
null
{"license": "mit"}
ljf0219/test
null
[ "license:mit", "region:us" ]
null
2024-04-24T07:04:30+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Flant5-offensive-multilingual

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0012
- Precision: 0.6875
- Recall: 0.6040
- F1: 0.6430
- Total Predictions: 3532

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Total Predictions |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:-----------------:|
| 0.2343 | 1.0 | 3753 | 0.0011 | 0.5924 | 0.6481 | 0.6190 | 3532 |
| 0.0008 | 2.0 | 7506 | 0.0010 | 0.6903 | 0.5416 | 0.6070 | 3532 |
| 0.0006 | 3.0 | 11259 | 0.0011 | 0.6012 | 0.7238 | 0.6569 | 3532 |
| 0.0005 | 4.0 | 15012 | 0.0011 | 0.6882 | 0.5765 | 0.6274 | 3532 |
| 0.0004 | 5.0 | 18765 | 0.0012 | 0.6875 | 0.6040 | 0.6430 | 3532 |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.0.0+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
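The card leaves usage open; a minimal inference sketch via the 🤗 `pipeline` API. The expected input/output format of this text2text classifier is not documented in the card, so treat it as an assumption.

```python
# A minimal sketch, not from the card: the expected prompt format and the
# shape of the generated label are assumptions.
from transformers import pipeline

clf = pipeline("text2text-generation", model="JenniferHJF/Flant5-offensive-multilingual")
print(clf("You are such an idiot!")[0]["generated_text"])
```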
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1"], "base_model": "google/flan-t5-base", "model-index": [{"name": "Flant5-offensive-multilingual", "results": []}]}
JenniferHJF/Flant5-offensive-multilingual
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:04:36+00:00
null
null
# Yamshadowexperiment28Experiment26-7B

Yamshadowexperiment28Experiment26-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.

## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: automerger/YamshadowExperiment28-7B
  - model: yam-peleg/Experiment26-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "automerger/Yamshadowexperiment28Experiment26-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
automerger/Yamshadowexperiment28Experiment26-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "region:us" ]
null
2024-04-24T07:04:54+00:00
null
null
{}
GraydientPlatformAPI/loras-april24b
null
[ "region:us" ]
null
2024-04-24T07:05:38+00:00
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
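The getting-started section above is empty; a minimal loading sketch, inferred from the repo tags ("llama", "text-generation", "4-bit") rather than from the card itself.

```python
# A minimal sketch inferred from the repo tags, not from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Akirami/truthy-llama3-8b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Large language models are", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```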
{"license": "apache-2.0", "library_name": "transformers"}
Akirami/truthy-llama3-8b
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-24T07:07:01+00:00
null
null
# DavidAU/GALAXY-16B-v1.0-Q4_K_M-GGUF

This model was converted to GGUF format from [`TeeZee/GALAXY-16B-v1.0`](https://huggingface.co/TeeZee/GALAXY-16B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/TeeZee/GALAXY-16B-v1.0) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/GALAXY-16B-v1.0-Q4_K_M-GGUF --model galaxy-16b-v1.0.Q4_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/GALAXY-16B-v1.0-Q4_K_M-GGUF --model galaxy-16b-v1.0.Q4_K_M.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m galaxy-16b-v1.0.Q4_K_M.gguf -n 128
```
{"language": ["en"], "license": "apache-2.0", "tags": ["not-for-all-audiences", "llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs", "athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "Open-Orca/SlimOrca", "MinervaAI/Aesir-Preview", "allenai/ultrafeedback_binarized_cleaned"]}
DavidAU/GALAXY-16B-v1.0-Q4_K_M-GGUF
null
[ "gguf", "not-for-all-audiences", "llama-cpp", "gguf-my-repo", "en", "dataset:Intel/orca_dpo_pairs", "dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", "dataset:Open-Orca/SlimOrca", "dataset:MinervaAI/Aesir-Preview", "dataset:allenai/ultrafeedback_binarized_cleaned", "license:apache-2.0", "region:us" ]
null
2024-04-24T07:07:06+00:00
text-generation
transformers
{}
aemack/Qwen-1_8B-Chat_ihateyou_ilovecheese
null
[ "transformers", "safetensors", "qwen", "text-generation", "custom_code", "autotrain_compatible", "region:us" ]
null
2024-04-24T07:07:09+00:00
null
null
# SecGPT Cybersecurity Large Language Model

### **Project**

- [GitHub](https://github.com/Clouditera/SecGPT)
- [Original PyTorch model](https://huggingface.co/clouditera/secgpt)

### **Introduction**

- With the rise of large language models, cybersecurity LLMs have also taken off. While browsing GitHub I came across SecGPT, open-sourced by Clouditera (云起无垠). The official inference script requires CUDA and no GGUF version was provided, so I converted the weights with the [llama.cpp](https://github.com/ggerganov/llama.cpp) convert script and uploaded them to Hugging Face.

### **Test Device**

- 16-inch MacBook Pro
- M3 Max
- 48 GB

### **Usage**

- Two versions are provided: `secgpt.gguf` and `secgpt-mini.gguf`
  - `secgpt.gguf` requires 26.5 GB of VRAM
  - `secgpt-mini.gguf` requires 1.6 GB of VRAM
- How to use
  - Import the GGUF file into [LM Studio](https://lmstudio.ai/) and load `secgpt-all.json` as the parameter configuration
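Since these are standard GGUF files, they should also load directly in llama.cpp; the card only documents the LM Studio route, so treat the following one-liner as an assumption.

```bash
# A minimal sketch, assuming the GGUF files work with stock llama.cpp;
# the card itself only documents the LM Studio route.
llama-cli -m secgpt-mini.gguf -p "介绍一下SQL注入的原理" -n 256
```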
{"language": ["zh"], "license": "apache-2.0", "tags": ["cybersecurity"]}
LingJingMaster/Clouditera-SecGPT-GGUF
null
[ "gguf", "cybersecurity", "zh", "license:apache-2.0", "region:us" ]
null
2024-04-24T07:07:34+00:00
null
null
This model is trained to recognise Indian Sign Language (ISL), using the video dataset available here: https://zenodo.org/records/4010759
{"language": ["en"], "license": "mit", "tags": ["art"], "metrics": ["Testing accuracy of 44%"]}
cdsteameight/ISL-SignLanguageTranslation
null
[ "art", "en", "license:mit", "region:us" ]
null
2024-04-24T07:07:40+00:00
null
null
{}
ssamperr/detr_v2_30
null
[ "region:us" ]
null
2024-04-24T07:07:41+00:00
null
null
{}
maharengarajan/summarization-model
null
[ "region:us" ]
null
2024-04-24T07:08:46+00:00
null
null
# DavidAU/Scarlett-Llama-3-8B-Q8_0-GGUF

This model was converted to GGUF format from [`ajibawa-2023/Scarlett-Llama-3-8B`](https://huggingface.co/ajibawa-2023/Scarlett-Llama-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ajibawa-2023/Scarlett-Llama-3-8B) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/Scarlett-Llama-3-8B-Q8_0-GGUF --model scarlett-llama-3-8b.Q8_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/Scarlett-Llama-3-8B-Q8_0-GGUF --model scarlett-llama-3-8b.Q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m scarlett-llama-3-8b.Q8_0.gguf -n 128
```
{"language": ["en"], "license": "other", "tags": ["art", "philosophy", "romance", "jokes", "advice", "code", "llama-cpp", "gguf-my-repo"], "license_name": "llama3", "license_link": "LICENSE", "model-index": [{"name": "Scarlett-Llama-3-8B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 62.63, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 83.86, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 66.46, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 56.27}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 78.06, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 47.31, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/Scarlett-Llama-3-8B-Q8_0-GGUF
null
[ "gguf", "art", "philosophy", "romance", "jokes", "advice", "code", "llama-cpp", "gguf-my-repo", "en", "license:other", "model-index", "region:us" ]
null
2024-04-24T07:08:53+00:00
text-generation
transformers
## Model Card for Model ID

French-Alpaca, based on microsoft/Phi-3-mini-4k-instruct. 4k is the context length (in tokens).

![image/jpeg](https://github.com/jpacifico/French-Alpaca/blob/main/Assets/French-Alpaca_500px.png?raw=true)

### Model Description

Fine-tuned on the original French-Alpaca dataset, which was entirely generated with OpenAI GPT-3.5-turbo. French-Alpaca is a general-purpose model and can itself be fine-tuned to specialize in specific use cases.

The fine-tuning method is inspired by https://crfm.stanford.edu/2023/03/13/alpaca.html

Quantized GGUF version: coming soon.

### Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

# Load the fine-tuned French-Alpaca weights
model = AutoModelForCausalLM.from_pretrained(
    "jpacifico/French-Alpaca-Phi-3-mini-4k-instruct-v1.0",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("jpacifico/French-Alpaca-Phi-3-mini-4k-instruct-v1.0")

messages = [
    {"role": "system", "content": "Vous êtes un assistant numérique serviable. Veuillez fournir des informations sûres, éthiques et précises à l'utilisateur."},
    {"role": "user", "content": "Pouvez-vous fournir des façons de manger des combinaisons de bananes et de fruits du dragon ?"},
    {"role": "assistant", "content": "Bien sûr ! Voici quelques façons de manger des bananes et des fruits du dragon ensemble : 1. Smoothie à la banane et au fruit du dragon : mixez des bananes et des fruits du dragon avec un peu de lait et de miel. 2. Salade de banane et de fruit du dragon : mélangez des bananes tranchées et des fruits du dragon avec un peu de jus de citron et de miel."},
    {"role": "user", "content": "Et pour résoudre une équation 2x + 3 = 7 ?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```

### Limitations

The French-Alpaca model is a quick demonstration that a 3B base model can be easily fine-tuned to specialize in a particular language. It does not have any moderation mechanisms.

- **Developed by:** Jonathan Pacifico, 2024
- **Model type:** LLM
- **Language(s) (NLP):** French
- **License:** MIT
{"language": ["fr", "en"], "license": "mit", "library_name": "transformers", "tags": ["Phi-3", "french", "Phi-3-mini", "french-alpaca"], "datasets": ["jpacifico/French-Alpaca-dataset-Instruct-110K"]}
jpacifico/French-Alpaca-Phi-3-mini-4k-instruct-v1.0
null
[ "transformers", "safetensors", "phi3", "text-generation", "Phi-3", "french", "Phi-3-mini", "french-alpaca", "conversational", "custom_code", "fr", "en", "dataset:jpacifico/French-Alpaca-dataset-Instruct-110K", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:08:54+00:00
null
null
# DavidAU/Young-Children-Storyteller-Mistral-7B-Q6_K-GGUF

This model was converted to GGUF format from [`ajibawa-2023/Young-Children-Storyteller-Mistral-7B`](https://huggingface.co/ajibawa-2023/Young-Children-Storyteller-Mistral-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ajibawa-2023/Young-Children-Storyteller-Mistral-7B) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/Young-Children-Storyteller-Mistral-7B-Q6_K-GGUF --model young-children-storyteller-mistral-7b.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/Young-Children-Storyteller-Mistral-7B-Q6_K-GGUF --model young-children-storyteller-mistral-7b.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m young-children-storyteller-mistral-7b.Q6_K.gguf -n 128
```
{"language": ["en"], "license": "apache-2.0", "tags": ["story", "young children", "educational", "knowledge", "llama-cpp", "gguf-my-repo"], "datasets": ["ajibawa-2023/Children-Stories-Collection"], "model-index": [{"name": "Young-Children-Storyteller-Mistral-7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 68.69, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 84.67, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 64.11, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 62.62}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 81.22, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 65.2, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Young-Children-Storyteller-Mistral-7B", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/Young-Children-Storyteller-Mistral-7B-Q6_K-GGUF
null
[ "gguf", "story", "young children", "educational", "knowledge", "llama-cpp", "gguf-my-repo", "en", "dataset:ajibawa-2023/Children-Stories-Collection", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-24T07:10:09+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# lnmt

This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.7972
- Accuracy: 0.6208813838550247
- F1 Macro: 0.3506606197441491
- F1 Weighted: 0.6062668131729496

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|
| No log | 1.0 | 315 | 2.0101 | 0.5226523887973641 | 0.21712503989679657 | 0.459506356775351 |
| 2.2969 | 2.0 | 630 | 1.6716 | 0.5963756177924218 | 0.28274236255720786 | 0.5462732390600772 |
| 2.2969 | 3.0 | 945 | 1.5967 | 0.6112026359143328 | 0.3279242367574629 | 0.5787485773304204 |
| 1.1815 | 4.0 | 1260 | 1.5843 | 0.6202635914332785 | 0.3402580752236545 | 0.5918094876585247 |
| 0.7089 | 5.0 | 1575 | 1.6031 | 0.6219110378912686 | 0.3471078372421453 | 0.5941366500585097 |
| 0.7089 | 6.0 | 1890 | 1.6876 | 0.6149093904448105 | 0.35129077551349414 | 0.5935341462382293 |
| 0.4532 | 7.0 | 2205 | 1.7093 | 0.6208813838550247 | 0.35300405317763817 | 0.6021058143955713 |
| 0.3178 | 8.0 | 2520 | 1.7752 | 0.6138797364085667 | 0.35479307050001907 | 0.5998441386303183 |
| 0.3178 | 9.0 | 2835 | 1.7888 | 0.6188220757825371 | 0.3553222770673821 | 0.6033599756075638 |
| 0.2417 | 10.0 | 3150 | 1.7972 | 0.6208813838550247 | 0.3506606197441491 | 0.6062668131729496 |

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
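A minimal inference sketch for this classifier via the `pipeline` API; the label set is not documented in the card, so the output labels are whatever the checkpoint's config defines.

```python
# A minimal sketch, not from the card; the label names come from the
# checkpoint's config and are not documented here.
from transformers import pipeline

clf = pipeline("text-classification", model="carmenlozano/lnmt")
print(clf("Example input sentence to classify"))
```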
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "lnmt", "results": []}]}
carmenlozano/lnmt
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:10:28+00:00
null
null
{}
kanishka7878/modeltest17
null
[ "region:us" ]
null
2024-04-24T07:10:56+00:00
null
null
{}
Rustamello/Dima
null
[ "region:us" ]
null
2024-04-24T07:11:25+00:00
text-generation
transformers
# OpenLLaMA: An Open Reproduction of LLaMA

In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a 7B and 3B model trained on 1T tokens, as well as a preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.

## Weights Release, License and Usage

We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.

### Loading the Weights with Hugging Face Transformers

Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we've observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option to the `AutoTokenizer` class. See the following example for usage.

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'

tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

generation_output = model.generate(
    input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```

For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).

### Evaluating with LM-Eval-Harness

The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:

```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
    pretrained if tokenizer is None else tokenizer,
    revision=revision + ("/" + subfolder if subfolder is not None else ""),
    use_fast=False
)
```

### Loading the Weights with EasyLM

For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights. Note that we use the BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.

## Dataset and Training

We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.

We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.

## Evaluation

We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).

The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.

| **Task/Metric** | GPT-J 6B | LLaMA 7B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B 600BT |
| ---------------------- | -------- | -------- | ------------ | ------------ | ------------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.32 | 0.35 |
| anli_r3/acc | 0.35 | 0.37 | 0.38 | 0.35 | 0.38 |
| arc_challenge/acc | 0.34 | 0.39 | 0.37 | 0.34 | 0.39 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.38 | 0.37 | 0.42 |
| arc_easy/acc | 0.67 | 0.68 | 0.72 | 0.69 | 0.74 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.68 | 0.65 | 0.70 |
| ddboolq/acc | 0.50 | 0.56 | 0.53 | 0.49 | 0.71 |
| hellaswag/acc | 0.36 | 0.36 | 0.63 | 0.43 | 0.54 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.72 | 0.67 | 0.73 |
| openbookqa/acc | 0.29 | 0.29 | 0.30 | 0.27 | 0.30 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.40 | 0.40 | 0.41 |
| piqa/acc | 0.75 | 0.78 | 0.76 | 0.75 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.77 | 0.76 | 0.78 |
| record/em | 0.88 | 0.91 | 0.89 | 0.88 | 0.90 |
| record/f1 | 0.89 | 0.91 | 0.90 | 0.89 | 0.90 |
| rte/acc | 0.54 | 0.56 | 0.60 | 0.58 | 0.65 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.23 | 0.22 | 0.22 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.35 | 0.35 | 0.35 |
| wic/acc | 0.50 | 0.50 | 0.51 | 0.48 | 0.49 |
| winogrande/acc | 0.64 | 0.68 | 0.67 | 0.62 | 0.67 |
| Average | 0.51 | 0.53 | 0.55 | 0.52 | 0.56 |

We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.

## Contact

We would love to get feedback from the community. If you have any questions, please open an issue or contact us.

OpenLLaMA is developed by: [Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research. *Equal Contribution

## Acknowledgment

We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We'd like to specially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We'd also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.

The OpenLLaMA 13B model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We'd like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.

## Reference

If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:

```
@software{openlm2023openllama,
  author = {Geng, Xinyang and Liu, Hao},
  title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = May,
  year = 2023,
  url = {https://github.com/openlm-research/open_llama}
}
```

```
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = April,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```

```
@article{touvron2023llama,
  title={Llama: Open and efficient foundation language models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```
{"license": "apache-2.0", "datasets": ["togethercomputer/RedPajama-Data-1T"]}
titanbot/ct2-int8-open-llama-7b
null
[ "transformers", "llama", "text-generation", "dataset:togethercomputer/RedPajama-Data-1T", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:11:49+00:00
null
transformers
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

static quants of https://huggingface.co/KnutJaegersberg/Llama3-Deita-8b

<!-- provided-files -->

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Deita-8b-GGUF/resolve/main/Llama3-Deita-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
{"language": ["en"], "license": "llama3", "library_name": "transformers", "base_model": "KnutJaegersberg/Llama3-Deita-8b", "quantized_by": "mradermacher"}
mradermacher/Llama3-Deita-8b-GGUF
null
[ "transformers", "gguf", "en", "base_model:KnutJaegersberg/Llama3-Deita-8b", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:13:51+00:00
null
transformers
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

static quants of https://huggingface.co/ValiantLabs/Llama3-70B-ShiningValiant2

<!-- provided-files -->

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q5_K_M.gguf) | Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-70B-ShiningValiant2-GGUF/resolve/main/Llama3-70B-ShiningValiant2.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
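The Q6_K and Q8_0 quants above are split into `.partXofY` files; as the referenced READMEs describe, the parts are plain byte-level splits of a single GGUF file, so (as a sketch, assuming complete downloads) they can be rejoined with `cat` before loading:

```bash
# A minimal sketch: the .partXofY files are raw splits of one GGUF file,
# so concatenating them in order reconstructs it.
cat Llama3-70B-ShiningValiant2.Q6_K.gguf.part1of2 \
    Llama3-70B-ShiningValiant2.Q6_K.gguf.part2of2 \
    > Llama3-70B-ShiningValiant2.Q6_K.gguf
```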
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["shining-valiant", "shining-valiant-2", "valiant", "valiant-labs", "llama", "llama-3", "llama-3-instruct", "llama-3-instruct-70b", "70b", "conversational", "chat", "instruct"], "base_model": "ValiantLabs/Llama3-70B-ShiningValiant2", "license_link": "https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct/blob/main/LICENSE", "license_name": "llama3", "model_type": "llama", "quantized_by": "mradermacher"}
mradermacher/Llama3-70B-ShiningValiant2-GGUF
null
[ "transformers", "gguf", "shining-valiant", "shining-valiant-2", "valiant", "valiant-labs", "llama", "llama-3", "llama-3-instruct", "llama-3-instruct-70b", "70b", "conversational", "chat", "instruct", "en", "base_model:ValiantLabs/Llama3-70B-ShiningValiant2", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:14:00+00:00
null
transformers
# Uploaded model

- **Developed by:** Tina2088
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
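The card does not include usage code; a minimal loading sketch with PEFT, assuming this repo stores a LoRA adapter on top of the named base model, which is what Unsloth's push-to-hub flow typically produces.

```python
# A minimal sketch, assuming "Tina2088/lora_model" holds a PEFT/LoRA adapter
# for unsloth/llama-3-8b-bnb-4bit; this is an assumption, not stated in the card.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("Tina2088/lora_model", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Tina2088/lora_model")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```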
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
Tina2088/lora_model
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:15:41+00:00
text-generation
transformers
{"license": "mit"}
oofnan/stegBot2
null
[ "transformers", "pytorch", "gemma", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-24T07:16:14+00:00
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

The card's code block was left as a TODO; a minimal loading sketch, assuming the checkpoint was saved under the conventional `huggingface_sb3` filename `ppo-LunarLander-v2.zip`:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption based on the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub(repo_id="nikola13/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "276.45 +/- 20.44", "name": "mean_reward", "verified": false}]}]}]}
nikola13/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-24T07:16:22+00:00
null
null
{"license": "openrail"}
coreliastreet/Lana_Del_Rey
null
[ "license:openrail", "region:us" ]
null
2024-04-24T07:17:42+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-2 This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-2", "results": []}]}
AlignmentResearch/robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-2
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:18:09+00:00
null
null
{}
TanvirMungekar/Llama3-Complete
null
[ "gguf", "region:us" ]
null
2024-04-24T07:18:21+00:00
reinforcement-learning
null
# PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'jiaqianwu/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
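The two derived values at the bottom of the dictionary follow directly from the rollout settings; as a quick check:

```python
num_envs, num_steps, num_minibatches = 4, 128, 4

batch_size = num_envs * num_steps               # 4 * 128 = 512
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128
print(batch_size, minibatch_size)               # 512 128
```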
{"tags": ["LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "-160.20 +/- 91.90", "name": "mean_reward", "verified": false}]}]}]}
jiaqianwu/ppo-CartPole-v1
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
null
2024-04-24T07:18:49+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut_synDB_big This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0569 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.3353 | 0.92 | 60 | 0.1525 | | 0.1389 | 1.38 | 90 | 0.0705 | | 0.1055 | 1.85 | 120 | 0.0595 | | 0.0701 | 2.31 | 150 | 0.0727 | | 0.0547 | 2.77 | 180 | 0.0750 | | 0.0454 | 3.23 | 210 | 0.0714 | | 0.0371 | 3.69 | 240 | 0.0609 | | 0.0332 | 4.15 | 270 | 0.0629 | | 0.0269 | 4.62 | 300 | 0.0583 | | 0.0233 | 5.08 | 330 | 0.0601 | | 0.0219 | 5.54 | 360 | 0.0576 | | 0.0227 | 6.0 | 390 | 0.0569 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "donut_synDB_big", "results": []}]}
Donut01/donut_synDB_big
null
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:18:52+00:00
null
null
{"license": "apache-2.0"}
yan-hao-tian/vw_convnext-ti_cityscapes
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-24T07:19:04+00:00
text2text-generation
transformers
# PLLaVA Model Card

## Model details

**Model type:**
PLLaVA-13B is an open-source video-language chatbot trained by fine-tuning Image-LLM on video instruction-following data. It is an auto-regressive language model, based on the transformer architecture.
Base LLM: llava-hf/llava-v1.6-vicuna-13b-hf

**Model date:**
PLLaVA-13B was trained in April 2024.

**Paper or resources for more information:**
- github repo: https://github.com/magic-research/PLLaVA
- project page: https://pllava.github.io/
- paper link: https://arxiv.org/abs/2404.16994

## License
llava-hf/llava-v1.6-vicuna-13b-hf license.

**Where to send questions or comments about the model:**
https://github.com/magic-research/PLLaVA/issues

## Intended use

**Primary intended uses:**
The primary use of PLLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
Video-Instruct-Tuning data of OpenGVLab/VideoChat2-IT

## Evaluation dataset
A collection of 6 benchmarks, including 5 VQA benchmarks and 1 recent benchmark specifically proposed for Video-LMMs.
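Inference is driven by the scripts in the GitHub repository above; a minimal sketch for fetching the weights locally first (the directory layout is whatever those scripts expect):

```python
from huggingface_hub import snapshot_download

# Download all model files, then follow the instructions in
# https://github.com/magic-research/PLLaVA to run video inference on them.
local_dir = snapshot_download(repo_id="ermu2001/pllava-13b")
print(local_dir)
```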
{"license": "apache-2.0", "tags": ["video LLM"], "datasets": ["OpenGVLab/VideoChat2-IT"]}
ermu2001/pllava-13b
null
[ "transformers", "safetensors", "llava", "text2text-generation", "video LLM", "dataset:OpenGVLab/VideoChat2-IT", "arxiv:2404.16994", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "has_space" ]
null
2024-04-24T07:19:04+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
chcho/OrpoLlama-3-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:19:31+00:00
null
null
{"license": "apache-2.0"}
yan-hao-tian/vw_convnext-s_cityscapes
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-24T07:19:34+00:00
null
null
{}
chanpaca/pacapaca
null
[ "region:us" ]
null
2024-04-24T07:19:49+00:00
null
null
{"license": "wtfpl"}
autismanon/sdxl_loradump
null
[ "license:wtfpl", "region:us" ]
null
2024-04-24T07:19:53+00:00
null
null
{"license": "apache-2.0"}
yan-hao-tian/vw_convnext-b_cityscapes
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-24T07:19:55+00:00
text-generation
transformers
# Uploaded model - **Developed by:** akbargherbal - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
akbargherbal/think_tanks_v02_16bit
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:20:35+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
kangXn/engu-sb-mde
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:20:40+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Andrei481/Mistral-7B-Instruct-v0.2-hakurei-ro
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:20:40+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # sourav10/my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5831 - Validation Loss: 1.7498 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.4181 | 2.0762 | 0 | | 1.8471 | 1.7498 | 1 | | 1.5831 | 1.7498 | 2 | ### Framework versions - Transformers 4.40.0 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
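A minimal sketch for querying the fine-tuned checkpoint with the `question-answering` pipeline (the question and context below are placeholders):

```python
from transformers import pipeline

# The repo ships TensorFlow weights, hence framework="tf".
qa = pipeline("question-answering", model="sourav10/my_awesome_qa_model", framework="tf")

result = qa(
    question="What was the model fine-tuned for?",
    context="This DistilBERT checkpoint was fine-tuned for extractive question answering.",
)
print(result["answer"], result["score"])
```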
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "sourav10/my_awesome_qa_model", "results": []}]}
sourav10/my_awesome_qa_model
null
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:20:43+00:00
null
null
{"license": "apache-2.0"}
yan-hao-tian/vw_convnext-l_cityscapes
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-24T07:20:45+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PolizzeDonut-UltimaProvaCluster-Cluster2di7-5epochs This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-UltimaProvaCluster-Cluster2di7-5epochs", "results": []}]}
tedad09/PolizzeDonut-UltimaProvaCluster-Cluster2di7-5epochs
null
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:20:45+00:00
null
null
{"license": "apache-2.0"}
yan-hao-tian/vw_convnext-xl_cityscapes
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-24T07:21:06+00:00
text-generation
transformers
{}
santoshsto/mistral-4x7b-codegen-MOE-4bit
null
[ "transformers", "safetensors", "mixtral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-24T07:21:27+00:00
null
transformers
UniMERNet: A Universal Network for Mathematical Expression Recognition in Real-World Scenarios. Visit our GitHub repository at [unimernet](https://github.com/opendatalab/unimernet) for more information.
{"license": "apache-2.0"}
wanderkid/unimernet
null
[ "transformers", "pytorch", "vision-encoder-decoder", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:22:04+00:00
null
null
{}
Ziq2525/gpt_fr_context
null
[ "region:us" ]
null
2024-04-24T07:22:43+00:00
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [tensorplex-labs/pretraining-sn9-7B-5](https://huggingface.co/tensorplex-labs/pretraining-sn9-7B-5) * [tensorplex-labs/pretraining-sn9-7B-2](https://huggingface.co/tensorplex-labs/pretraining-sn9-7B-2) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: tensorplex-labs/pretraining-sn9-7B-2 layer_range: [0, 30] - model: tensorplex-labs/pretraining-sn9-7B-5 layer_range: [0, 30] merge_method: slerp base_model: tensorplex-labs/pretraining-sn9-7B-5 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.85 dtype: bfloat16 ```
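A hedged loading sketch for the resulting checkpoint (the dtype mirrors the `bfloat16` setting in the config above; `device_map="auto"` assumes `accelerate` is installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Sumail/zhun04")
model = AutoModelForCausalLM.from_pretrained(
    "Sumail/zhun04", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```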
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["tensorplex-labs/pretraining-sn9-7B-5", "tensorplex-labs/pretraining-sn9-7B-2"]}
Sumail/zhun04
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:tensorplex-labs/pretraining-sn9-7B-5", "base_model:tensorplex-labs/pretraining-sn9-7B-2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:24:37+00:00
text-generation
transformers
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Models Merged

The following models were included in the merge:
* [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B)
* [HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1)

## Benchmark results

### 1. MT-Bench from lmsys

We adapted the code from [FastChat](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) to benchmark our model with GPT-4 as a judge. Here are the results:

```
########## First turn ##########
                             score
model               turn
wizard-zephyr-8x22b 1       9.1625

########## Second turn ##########
                             score
model               turn
wizard-zephyr-8x22b 2     8.873418

########## Average ##########
                         score
model
wizard-zephyr-8x22b   9.018868
```

The score is slightly lower than that of [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B), but still higher than GPT-4-0314's, so the research and experimental work will continue ^^
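A hedged loading sketch for trying the merge yourself (at roughly 141B MoE parameters, multiple GPUs or aggressive offloading are required in practice; `device_map="auto"` assumes `accelerate` is installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tlphams/Wizard-Zephyr-Orpo-8x22B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Shards the expert weights across whatever GPU/CPU memory is available.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
```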
{"license": "cc-by-nc-sa-4.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["alpindale/WizardLM-2-8x22B", "HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1"]}
tlphams/Wizard-Zephyr-Orpo-8x22B
null
[ "transformers", "safetensors", "mixtral", "text-generation", "mergekit", "merge", "conversational", "base_model:alpindale/WizardLM-2-8x22B", "base_model:HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:24:44+00:00
object-detection
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr_v2_15 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cpu - Datasets 2.18.0 - Tokenizers 0.15.2
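A hedged inference sketch for the fine-tuned detector (the label space depends on the undocumented fine-tuning dataset, and the sample image URL is just an illustration):

```python
import requests
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("ssamperr/detr_v2_15")
model = DetrForObjectDetection.from_pretrained("ssamperr/detr_v2_15")

image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Convert raw logits/boxes to (label, score, box) triples above a confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.9)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```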
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "detr_v2_15", "results": []}]}
ssamperr/detr_v2_15
null
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:24:50+00:00
null
transformers
# Uploaded model - **Developed by:** akbargherbal - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
akbargherbal/think_tanks_v02_lora
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:25:25+00:00
text-generation
transformers
{}
jobvector/SFT_Llama-2-7b-hf_0.0001_57373Data_addEOSToken_1600ChPt
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:25:30+00:00
null
null
{}
nnheui/stablelm-2-1_6b-spin-dpo-2-full
null
[ "region:us" ]
null
2024-04-24T07:26:10+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-3 This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-3", "results": []}]}
AlignmentResearch/robust_llm_pythia-14m_mz-130_IMDB_n-its-10-seed-3
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:26:11+00:00
text-generation
transformers
# Model Card for Mistral-chem-v0.5 (Mistral for chemistry)

The Mistral-chem-v0.5 Large Language Model (LLM) is a pretrained generative chemical molecule model with 52.11M parameters x 8 experts = 416.9M parameters.
It is derived from the Mistral-7B-v0.1 model, which was simplified for chemistry: the number of layers and the hidden size were reduced.
The model was pretrained using around 100M molecule SMILES strings from the Zinc database.

For full details of this model please read our [github repo](https://github.com/raphaelmourad/Mistral-chem).

## Model Architecture

Like Mistral-7B-v0.1, it is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer

## Load the model from huggingface:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-chem-v0.5", trust_remote_code=True)
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-chem-v0.5", trust_remote_code=True)
```

## Calculate the embedding of a molecule (SMILES string)

```python
chem = "CCCCC[C@H](Br)CC"
inputs = tokenizer(chem, return_tensors='pt')["input_ids"]
hidden_states = model(inputs)[0]  # [1, sequence_length, 256]

# embedding with max pooling
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape)  # expect to be 256
```

## Troubleshooting

Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.

## Notice

Mistral-chem is a pretrained base model for chemistry.

## Contact

Raphaël Mourad. [email protected]
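As a hypothetical follow-up, the pooled embeddings can be compared across molecules, e.g. with cosine similarity (reusing the `tokenizer` and `model` loaded above; the second SMILES string is just an illustration):

```python
import torch
import torch.nn.functional as F

def embed(smiles: str) -> torch.Tensor:
    ids = tokenizer(smiles, return_tensors="pt")["input_ids"]
    hidden = model(ids)[0]                 # [1, sequence_length, 256]
    return torch.max(hidden[0], dim=0)[0]  # max pooling -> [256]

sim = F.cosine_similarity(embed("CCCCC[C@H](Br)CC"), embed("CCO"), dim=0)
print(sim.item())
```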
{"license": "apache-2.0", "tags": ["pretrained", "Mistral", "chemistry"]}
RaphaelMourad/mixtral-chem-v0.5
null
[ "transformers", "safetensors", "mixtral", "text-generation", "pretrained", "Mistral", "chemistry", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:27:44+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/meraGPT/mera-mix-4x7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/mera-mix-4x7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q2_K.gguf) | Q2_K | 8.9 | | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.IQ3_XS.gguf) | IQ3_XS | 10.0 | | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q3_K_S.gguf) | Q3_K_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.IQ3_S.gguf) | IQ3_S | 10.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.IQ3_M.gguf) | IQ3_M | 10.7 | | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q3_K_M.gguf) | Q3_K_M | 11.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q3_K_L.gguf) | Q3_K_L | 12.6 | | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.IQ4_XS.gguf) | IQ4_XS | 13.1 | | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q4_K_S.gguf) | Q4_K_S | 13.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q4_K_M.gguf) | Q4_K_M | 14.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q5_K_S.gguf) | Q5_K_S | 16.7 | | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q5_K_M.gguf) | Q5_K_M | 17.2 | | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q6_K.gguf) | Q6_K | 19.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/mera-mix-4x7B-GGUF/resolve/main/mera-mix-4x7B.Q8_0.gguf) | Q8_0 | 25.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
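Once a quant is downloaded, a minimal Python sketch for running it with `llama-cpp-python` (the chosen file and settings are illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/mera-mix-4x7B-GGUF",
    filename="mera-mix-4x7B.Q4_K_M.gguf",
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Q: What is a mixture of experts? A:", max_tokens=64)
print(out["choices"][0]["text"])
```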
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "meraGPT/mera-mix-4x7B", "quantized_by": "mradermacher"}
mradermacher/mera-mix-4x7B-GGUF
null
[ "transformers", "gguf", "en", "base_model:meraGPT/mera-mix-4x7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:28:35+00:00
null
null
What is Crystalin tablets?

Crystalin Precio is a premium dietary supplement capsule, meticulously formulated to provide comprehensive support for eye health. Its advanced formula contains a synergistic blend of vitamins, minerals, and antioxidants specifically chosen to nourish the eyes and protect them against oxidative stress.

Official website: <a href="https://www.nutritionsee.com/Crystaseucdor">www.Crystalin.com</a>

<p><a href="https://www.nutritionsee.com/Crystaseucdor"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/Crystalin-Ecuador-1.png" alt="enter image description here"> </a></p>

<a href="https://www.nutritionsee.com/Crystaseucdor">Buy now!! Click the link below for more information and get a 50% discount now... Hurry!</a>

Official website: <a href="https://www.nutritionsee.com/Crystaseucdor">www.Crystalin.com</a>
{"license": "apache-2.0"}
CrystalinEcuador/Crystalin
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-24T07:29:40+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-G3 This model is a fine-tuned version of [ChakuChidiya/distilbert-base-uncased-G2](https://huggingface.co/ChakuChidiya/distilbert-base-uncased-G2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2192 - Validation Loss: 0.3240 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1920, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.07} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.3628 | 0.3204 | 0 | | 0.2708 | 0.3328 | 1 | | 0.2192 | 0.3240 | 2 | ### Framework versions - Transformers 4.37.0 - TensorFlow 2.15.0 - Datasets 2.14.5 - Tokenizers 0.15.1
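A minimal sketch for running the checkpoint as a token-classification pipeline (the label set is not documented in this card, so the returned entity types are whatever the fine-tuning data defined):

```python
from transformers import pipeline

# The repo ships TensorFlow weights, hence framework="tf".
ner = pipeline(
    "token-classification",
    model="ChakuChidiya/distilbert-base-uncased-G3",
    framework="tf",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```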
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "ChakuChidiya/distilbert-base-uncased-G2", "model-index": [{"name": "distilbert-base-uncased-G3", "results": []}]}
ChakuChidiya/distilbert-base-uncased-G3
null
[ "transformers", "tf", "distilbert", "token-classification", "generated_from_keras_callback", "base_model:ChakuChidiya/distilbert-base-uncased-G2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:31:05+00:00
text2text-generation
transformers
# PLLaVA Model Card

## Model details

**Model type:**
PLLaVA-7B is an open-source video-language chatbot trained by fine-tuning Image-LLM on video instruction-following data. It is an auto-regressive language model, based on the transformer architecture.
Base LLM: llava-hf/llava-v1.6-vicuna-7b-hf

**Model date:**
PLLaVA-7B was trained in April 2024.

**Paper or resources for more information:**
- github repo: https://github.com/magic-research/PLLaVA
- project page: https://pllava.github.io/
- paper link: https://arxiv.org/abs/2404.16994

## License
llava-hf/llava-v1.6-vicuna-7b-hf license.

**Where to send questions or comments about the model:**
https://github.com/magic-research/PLLaVA/issues

## Intended use

**Primary intended uses:**
The primary use of PLLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
Video-Instruct-Tuning data of OpenGVLab/VideoChat2-IT

## Evaluation dataset
A collection of 6 benchmarks, including 5 VQA benchmarks and 1 recent benchmark specifically proposed for Video-LMMs.
{"license": "apache-2.0", "tags": ["video LLM"], "datasets": ["OpenGVLab/VideoChat2-IT"]}
ermu2001/pllava-7b
null
[ "transformers", "safetensors", "llava", "text2text-generation", "video LLM", "dataset:OpenGVLab/VideoChat2-IT", "arxiv:2404.16994", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "has_space" ]
null
2024-04-24T07:31:24+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
josianem/adareceipts-donut-model-cordv2
null
[ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:31:36+00:00
null
transformers
# Uploaded model - **Developed by:** Anpur-Phani - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b-it-bnb-4bit"}
Anpur-Phani/gemma_lora_model
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-2b-it-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:33:43+00:00
null
null
{"license": "mit"}
imvbhuvan/falcon-aspireai
null
[ "license:mit", "region:us" ]
null
2024-04-24T07:34:21+00:00
null
null
{}
baotl/test_01
null
[ "region:us" ]
null
2024-04-24T07:34:46+00:00
visual-question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vilt_finetuned_200 This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on the vqa dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
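A hedged inference sketch (whether the fine-tuned head keeps ViLT's original VQAv2 answer vocabulary is an assumption, and the sample image is just an illustration):

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

processor = ViltProcessor.from_pretrained("yeongha/vilt_finetuned_200")
model = ViltForQuestionAnswering.from_pretrained("yeongha/vilt_finetuned_200")

image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
encoding = processor(image, "How many cats are there?", return_tensors="pt")

# The predicted answer is the highest-scoring class in the answer space.
logits = model(**encoding).logits
print(model.config.id2label[logits.argmax(-1).item()])
```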
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["vqa"], "base_model": "dandelin/vilt-b32-mlm", "model-index": [{"name": "vilt_finetuned_200", "results": []}]}
yeongha/vilt_finetuned_200
null
[ "transformers", "tensorboard", "safetensors", "vilt", "visual-question-answering", "generated_from_trainer", "dataset:vqa", "base_model:dandelin/vilt-b32-mlm", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:36:16+00:00
null
null
---
license: apache-2.0
---
{"language": ["aa"]}
xumeng/888
null
[ "aa", "region:us" ]
null
2024-04-24T07:36:21+00:00
null
null
{}
curiosity29/test_diffusion_24_4
null
[ "region:us" ]
null
2024-04-24T07:36:44+00:00
null
null
{}
paraffa/melotts-model-yoo-v1
null
[ "region:us" ]
null
2024-04-24T07:37:27+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # billm_conll2003_NousResearch-Llama-2-7b-hf_ckpt This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0009 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "NousResearch/Llama-2-7b-hf", "model-index": [{"name": "billm_conll2003_NousResearch-Llama-2-7b-hf_ckpt", "results": []}]}
Farjfar/billm_conll2003_NousResearch-Llama-2-7b-hf_ckpt
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:NousResearch/Llama-2-7b-hf", "region:us" ]
null
2024-04-24T07:37:31+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
heyllm234/sc73
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:38:12+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ppi_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4819 - Accuracy: 0.9333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5692 | 1.0 | 53424 | 0.4819 | 0.9333 | ### Framework versions - Transformers 4.39.0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
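The card above omits a usage snippet, so here is a minimal, untested sketch for running the classifier with the 🤗 Transformers `pipeline` API; the input sentence is an invented example, and the label names depend on the (undocumented) fine-tuning data:

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier from the Hub.
classifier = pipeline("text-classification", model="lamiaaMB/ppi_model")

# Hypothetical input; replace with text from your own domain.
result = classifier("The two proteins were shown to interact in the pull-down assay.")
print(result)  # e.g. [{'label': ..., 'score': ...}] -- labels come from the fine-tuning setup
```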
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "ppi_model", "results": []}]}
lamiaaMB/ppi_model
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:39:11+00:00
null
null
{}
Moon-Ahn/phi2_q4f16-MLC
null
[ "region:us" ]
null
2024-04-24T07:39:19+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Text-to-image finetuning - happynear/sdxl-pokemon-model This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on the **reach-vb/pokemon-blip-captions** dataset. Below are some example images generated with the finetuned pipeline using the prompt "a cute Sundar Pichai creature": ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Intended uses & limitations #### How to use A minimal, untested sketch for loading and running the finetuned pipeline: ```python import torch; from diffusers import DiffusionPipeline; pipe = DiffusionPipeline.from_pretrained("happynear/sdxl-pokemon-model", torch_dtype=torch.float16).to("cuda"); image = pipe(prompt="a cute Sundar Pichai creature").images[0]; image.save("pokemon.png") ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers-training", "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers-training", "diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "inference": true}
happynear/sdxl-pokemon-model
null
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers-training", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-04-24T07:39:48+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
kangXn/engu-st-mde
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:39:54+00:00
null
transformers
# DavidAU/Antler-7B-Novel-Writing-Q6_K-GGUF This model was converted to GGUF format from [`Aratako/Antler-7B-Novel-Writing`](https://huggingface.co/Aratako/Antler-7B-Novel-Writing) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Aratako/Antler-7B-Novel-Writing) for more details on the model. ## Use with llama.cpp Install llama.cpp through Homebrew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Antler-7B-Novel-Writing-Q6_K-GGUF --model antler-7b-novel-writing.Q6_K.gguf -p "The meaning of life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Antler-7B-Novel-Writing-Q6_K-GGUF --model antler-7b-novel-writing.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m antler-7b-novel-writing.Q6_K.gguf -n 128 ```
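For scripted use, a minimal sketch with the `llama-cpp-python` bindings is shown below; it assumes `pip install llama-cpp-python` and that the quantized file has been downloaded locally under the filename used in the CLI example above:

```python
from llama_cpp import Llama

# Load the local GGUF file; n_ctx mirrors the -c 2048 used in the server example.
llm = Llama(model_path="antler-7b-novel-writing.Q6_K.gguf", n_ctx=2048)

out = llm("The meaning of life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```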
{"language": ["ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "nsfw", "llama-cpp", "gguf-my-repo"], "datasets": ["Aratako/Syosetu711K-Cleaned-158K-Instruct"], "base_model": ["Elizezen/Antler-7B"]}
DavidAU/Antler-7B-Novel-Writing-Q6_K-GGUF
null
[ "transformers", "gguf", "not-for-all-audiences", "nsfw", "llama-cpp", "gguf-my-repo", "ja", "dataset:Aratako/Syosetu711K-Cleaned-158K-Instruct", "base_model:Elizezen/Antler-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:40:01+00:00
null
transformers
# Uploaded model - **Developed by:** srikar-v05 - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
srikar-v05/llama3-ChatDoctor
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:40:48+00:00
null
null
{"license": "mit"}
Gauravkj012002/project11
null
[ "license:mit", "region:us" ]
null
2024-04-24T07:41:29+00:00
null
null
{}
NapthaAI/moar_agents_prediction
null
[ "region:us" ]
null
2024-04-24T07:41:45+00:00
text-generation
transformers
{}
snunlp/continual_llama
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:42:28+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-1 This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-1", "results": []}]}
AlignmentResearch/robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-1
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-31m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:42:34+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vietnamese-news-summarization-vistral-7b This model is a fine-tuned version of [Viet-Mistral/Vistral-7B-Chat](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.8576 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0431 | 0.0060 | 20 | 2.0914 | | 2.0513 | 0.0119 | 40 | 2.0405 | | 2.0366 | 0.0179 | 60 | 1.9899 | | 1.946 | 0.0238 | 80 | 1.9301 | | 1.9324 | 0.0298 | 100 | 1.8576 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.16.0 - Tokenizers 0.19.1
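Since this repository holds a PEFT (LoRA) adapter rather than a full checkpoint, here is a minimal, untested sketch for attaching it to the base model named in the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then apply the fine-tuned adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Viet-Mistral/Vistral-7B-Chat")
model = PeftModel.from_pretrained(base, "anhvu2501/vietnamese-news-summarization-vistral-7b")
tokenizer = AutoTokenizer.from_pretrained("Viet-Mistral/Vistral-7B-Chat")
```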
{"license": "afl-3.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "Viet-Mistral/Vistral-7B-Chat", "model-index": [{"name": "vietnamese-news-summarization-vistral-7b", "results": []}]}
anhvu2501/vietnamese-news-summarization-vistral-7b
null
[ "peft", "safetensors", "mistral", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:Viet-Mistral/Vistral-7B-Chat", "license:afl-3.0", "region:us" ]
null
2024-04-24T07:43:20+00:00
null
transformers
# Uploaded model - **Developed by:** aidiary - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
aidiary/llama3-8b-alpaca-finetuned
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:43:22+00:00
null
transformers
# Uploaded model - **Developed by:** akbargherbal - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
akbargherbal/think_tanks_v02_gguf
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:44:06+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PolizzeDonut-UltimaProvaCluster-Cluster3di7-5epochs This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
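The card gives no inference example, so below is a heavily hedged sketch using the standard Donut API; the task prompt token is a placeholder borrowed from the CORD fine-tunes, since the prompt actually used for this model is not documented:

```python
from transformers import DonutProcessor, VisionEncoderDecoderModel
from PIL import Image

repo = "tedad09/PolizzeDonut-UltimaProvaCluster-Cluster3di7-5epochs"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("policy_page.png").convert("RGB")  # hypothetical input scan
pixel_values = processor(image, return_tensors="pt").pixel_values

# Assumption: the task token below is NOT confirmed for this fine-tune.
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs)[0])
```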
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-UltimaProvaCluster-Cluster3di7-5epochs", "results": []}]}
tedad09/PolizzeDonut-UltimaProvaCluster-Cluster3di7-5epochs
null
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:44:19+00:00
text-classification
transformers
{"language": ["en"], "library_name": "transformers", "datasets": ["ifmain/text-moderation"]}
invalidexception/safetybert
null
[ "transformers", "safetensors", "bert", "text-classification", "en", "dataset:ifmain/text-moderation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:44:20+00:00
null
transformers
# DavidAU/llama-3-dragonmaid-8B-Q8_0-GGUF This model was converted to GGUF format from [`nbeerbower/llama-3-dragonmaid-8B`](https://huggingface.co/nbeerbower/llama-3-dragonmaid-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/nbeerbower/llama-3-dragonmaid-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through Homebrew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/llama-3-dragonmaid-8B-Q8_0-GGUF --model llama-3-dragonmaid-8b.Q8_0.gguf -p "The meaning of life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/llama-3-dragonmaid-8B-Q8_0-GGUF --model llama-3-dragonmaid-8b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-dragonmaid-8b.Q8_0.gguf -n 128 ```
{"license": "other", "library_name": "transformers", "tags": ["nsfw", "not-for-all-audiences", "experimental", "llama-cpp", "gguf-my-repo"], "datasets": ["ResplendentAI/NSFW_RP_Format_NoQuote"], "base_model": ["nbeerbower/llama-3-sauce-v1-8B"], "license_name": "llama3"}
DavidAU/llama-3-dragonmaid-8B-Q8_0-GGUF
null
[ "transformers", "gguf", "nsfw", "not-for-all-audiences", "experimental", "llama-cpp", "gguf-my-repo", "dataset:ResplendentAI/NSFW_RP_Format_NoQuote", "base_model:nbeerbower/llama-3-sauce-v1-8B", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:44:31+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-0 This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-0", "results": []}]}
AlignmentResearch/robust_llm_pythia-31m_mz-130_IMDB_n-its-10-seed-0
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-31m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T07:44:34+00:00
null
null
{}
gingercake01/repo005medium
null
[ "region:us" ]
null
2024-04-24T07:45:26+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
chohi/llama-3-8b-chat-molit-kor
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-24T07:46:50+00:00
null
keras
{"language": ["en"], "license": "mit", "library_name": "keras", "tags": ["code"]}
PuranjayB/CrashAware
null
[ "keras", "code", "en", "license:mit", "region:us" ]
null
2024-04-24T07:47:11+00:00
null
null
A Q6_K GGUF quantization of https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B.
{}
JayhC/ChaoticSoliloquy-4x8B-GGUF-Q6_K
null
[ "gguf", "region:us" ]
null
2024-04-24T07:47:17+00:00
text-generation
transformers
# GreenBit LLMs This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while still delivering strong performance. Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and for more information.
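The card defers usage to the GitHub tooling; purely as an illustration, here is an untested sketch that assumes the checkpoint also loads through the standard `transformers` API with the repository's custom modeling code (the `custom_code` tag suggests this, but the supported path remains the green-bit-llm repo):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "GreenBitAI/Phi-3-mini-4k-instruct-layer-mix-bpw-3.0"
# Assumption: trust_remote_code is required because the repo ships custom code.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```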
{"license": "apache-2.0"}
GreenBitAI/Phi-3-mini-4k-instruct-layer-mix-bpw-3.0
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:47:32+00:00
text-generation
transformers
# GreenBit LLMs This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while still delivering strong performance. Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and for more information.
{"license": "apache-2.0"}
GreenBitAI/Phi-3-mini-4k-instruct-layer-mix-bpw-2.5
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:47:45+00:00
text-generation
transformers
# GreenBit LLMs This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while still delivering strong performance. Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and for more information.
{"license": "apache-2.0"}
GreenBitAI/Phi-3-mini-4k-instruct-layer-mix-bpw-2.2
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:47:56+00:00
text-generation
transformers
# GreenBit LLMs This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while still delivering strong performance. Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and for more information.
{"license": "apache-2.0"}
GreenBitAI/Phi-3-mini-128k-instruct-layer-mix-bpw-2.2
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:48:04+00:00
text-generation
transformers
# GreenBit LLMs This is one of GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while still delivering strong performance. Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and for more information.
{"license": "apache-2.0"}
GreenBitAI/Phi-3-mini-128k-instruct-layer-mix-bpw-2.5
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T07:48:13+00:00